[MERGE] updated benchmark display
holger krekel
holger at merlinux.de
Wed Aug 16 21:59:45 BST 2006
Hi Robert,
On Tue, Aug 15, 2006 at 12:02 +1000, Robert Collins wrote:
> There are some general stylistic issues with the patch:
> - please use self.assertTHING rather than 'assert' in the test suite.
> We like our tests to run with -O [in principle]. More important is the
> consistency with the rest of bzrlib and the improved diagnostics we get.
> - please put docstrings or comments in your tests explaining what and
> why you are testing. They are rather bare at the moment.
> - please put docstrings on your functions outside the test suite too!
> - you have some duplicate code fragments in the tests that could well
> be factored out - making the tests easier to read.
> - please put docstrings on classes. These are really essential when
> trying to understand how you are intending to tie the various components
> together.
Please find a much-updated diff attached, along with some
selected inline comments further below. Our branch
is also available at
http://codespeak.net/bzr/benchmark-display.merge1
Note that we did some more cleanups and refactorings resulting
from your review comments and discussions with Martin and
others on #bzr. Hopefully we are getting closer to a merge :)
> I think that the benchmark page generator could well be a hidden
> command, that way there's no new binary to install when bzr is packaged,
> and there would be less boilerplate needed.
Right, the other possibility might be to go for a plugin.
But let's not tackle this issue within this merge request, ok?
> Also, as I mention below, I think doing an external css would make
> sense, and make it cleaner in terms of code.
IIRC, the idea was to have everything on one page, but css is
admittedly a corner case. So we moved the css code to
tools/benchmark_report.css, and benchmark_report.html now links to it.
> I haven't looked at the sample output yet, but I'll do that once the
> current issues are worked through.
You can find sample data here
http://merlinux.de/~hpk/revs109_dates436_3percent.txt
which you feed to the tools/benchmark_report.py script
in order to get something like:
http://merlinux.de/~hpk/benchmark_report.html
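In case it helps, generating the page by hand boils down to
something like this (a sketch based on the main() function in the
attached tools/benchmark_report.py; the second argument is the
branch used to annotate revisions and defaults to '.'):

    # roughly what "python tools/benchmark_report.py DATA BRANCH" does
    from tools.benchmark_report import main
    main('revs109_dates436_3percent.txt', '.')
    # writes benchmark_report.html into the current directory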
> I think that this could be clearer with shorter sample data -
> """package.module.test_name 100ms
> ...
> """
> would be enough to test with, but a lot easier to read.
At least two data lines are required to check the handling of empty
lines, so we reduced the test data to two lines.
>> +class PerfResultDelta:
>> + stud_num = 100
>> + stude_t = [[1.282, 1.645, 1.960, 2.326, 2.576, 3.090], # inf
>> + [3.078, 6.314, 12.706, 31.821, 63.657, 318.313], #1.
> ...
>> + [1.290, 1.660, 1.984, 2.364, 2.626, 3.174] # 100.
>> + ]
>
> Whats this table for ? Do we need it embedded in the source, or can we
> generate it ?
> ...
It's a standard table of critical values for the Student's t
distribution. However, after some more discussion (also on #bzr)
we decided to drop this statistical analysis - it's not clear how
well it fits benchmark timings anyway.
best,
holger
P.S.: Carl Friedrich is ill today; hopefully he'll be better tomorrow.
-------------- next part --------------
=== added file 'bzrlib/tests/test_benchmark_report.py'
--- bzrlib/tests/test_benchmark_report.py 1970-01-01 00:00:00 +0000
+++ bzrlib/tests/test_benchmark_report.py 2006-08-16 20:22:14 +0000
@@ -0,0 +1,466 @@
+import os
+from bzrlib.tests import TestCase, TestCaseInTempDir
+from tools import pyxml
+from tools.benchmark_report import (
+ PerfResult, PerfTable, PerfResultCollection,
+ PerfResultDelta, Page
+)
+
+# Sample performance data with a newline
+testlines = """\
+--date 1152625530.0 hacker at canonical.com-20060705122759-0a3481a4647b16dc
+ 100ms package.module.test_name1
+
+ 200ms package.module.test_name2
+
+""".splitlines()
+
+class TestPerfTable(TestCase):
+ """tests to ensure that performance data files are processed properly"""
+
+ def setUp(self):
+ self.lines = testlines[:]
+ self.num_test_ids = 2
+
+ def test_parse(self):
+        """parse a date line plus one result line and check the data"""
+ lines = self.lines[:2]
+ t = PerfTable()
+ data = list(t.parse(lines))
+ self.assertEquals(len(data), 1) # check that one object was created
+
+ self.assertIsInstance(data[0], PerfResult) # is it really a PerfResult
+ pr = data[0]
+ expected = PerfResult(
+ date=1152625530.0,
+ revision_id='hacker at canonical.com-20060705122759-0a3481a4647b16dc',
+ elapsed_time=100,
+            test_id='package.module.test_name1',
+ )
+ self.assertEquals(pr.test_id, expected.test_id)
+ self.assertEquals(pr.revision_id, expected.revision_id)
+ self.assertEquals(pr.elapsed_time, expected.elapsed_time)
+ self.assertEquals(pr.date, expected.date)
+
+ def test_multiline_parse(self):
+        """ensure all lines are parsed correctly"""
+
+        lines = self.lines
+ results = list(PerfTable().parse(lines))
+ self.assertEquals(len(results), self.num_test_ids) # was all data read?
+ self.assertEquals(len(dict.fromkeys([r.revision_id for r in results])),
+ 1) # check for one unique revision id
+
+ # check for self.num_test_ids unique test ids
+ self.assertEquals(len(dict.fromkeys([r.test_id for r in results])),
+ self.num_test_ids)
+
+ def test_get_results(self):
+ """check that get_results returns the right number of PerfResults"""
+
+ perftable = PerfTable()
+ perftable.add_lines(self.lines)
+ results = list(perftable.get_results())
+ self.assertEquals(len(results), self.num_test_ids)
+ test_ids = perftable.list_values_of('test_id')
+        # get_results with no test_ids argument must return the same
+        # results as passing the full list of test ids
+        self.assertEquals(list(perftable.get_results()),
+                          list(perftable.get_results(test_ids=test_ids)))
+        # check that get_results returns only the requested results
+        results = list(perftable.get_results(test_ids=test_ids[1:]))
+        self.assertEquals(len(results), self.num_test_ids - 1)
+
+
+ def test_get_results_two_dates_same_revision_id(self):
+ """check that PerfTable handles 2 dates in the data correctly"""
+
+ # generate a second block of performance history data
+ # with the same length as the first block
+ new_date = float(self.lines[0].split(None, 2)[1]) + 1
+ new_line = '%s %s %s' % (
+ self.lines[0].split(None, 2)[0],
+ new_date,
+ self.lines[0].split(None, 2)[2],
+ )
+ lines = [new_line] + self.lines[1:] + self.lines
+
+ perftable = PerfTable(lines)
+ results = list(perftable.get_results())
+        self.assertEquals(len(results), 2 * self.num_test_ids)
+
+ # get results for one (test_id, revision_id) pair
+ # and check that get_result returns a result object for each date
+ test_ids = perftable.list_values_of('test_id')[:1]
+ revision_ids = perftable.list_values_of('revision_id')[:1]
+        results = list(perftable.get_results(test_ids=test_ids,
+                                             revision_ids=revision_ids))
+        self.assertEquals(len(results), 2)  # one result per date
+ r0 = results[0]
+ r1 = results[1]
+ self.assertNotEquals(r0, r1)
+        # the two results should describe the same test and differ
+        # only in their dates
+        r0.date = r1.date = None
+        self.assertEquals(r0.test_id, r1.test_id)
+
+
+ def test_list_values_of(self):
+ perftable = PerfTable(self.lines)
+ self.assertEquals(perftable.list_values_of('date'), [1152625530.0])
+ self.assertEquals(perftable.list_values_of('revision_id'),
+ ['hacker at canonical.com-20060705122759-0a3481a4647b16dc'])
+ self.assertEquals(len(perftable.list_values_of('test_id')),
+ self.num_test_ids)
+ # check number of unique test ids
+ self.assertEquals(len(dict.fromkeys(
+ perftable.list_values_of('test_id')).keys()), 2)
+
+
+class TestPerfResultCollection(TestCase):
+ """check the average and variance computation"""
+
+ def test_property_elapsed_time(self):
+        """check the min_elapsed computed property"""
+        p1 = PerfResult(elapsed_time=1, date=132.123)
+        p2 = PerfResult(elapsed_time=2, date=142.1)
+        sample = PerfResultCollection([p1, p2])
+ self.assertEquals(sample.min_elapsed, 1)
+
+
+class TestPerfResultDelta(TestCase):
+ """check delta computations and the statistical tests"""
+
+ def setUp(self):
+        self.r1 = PerfResult(date=123123.123, elapsed_time=10)
+        self.r2 = PerfResult(date=123523.123, elapsed_time=11)
+ self.c1 = PerfResultCollection([self.r1])
+ self.c2 = PerfResultCollection([self.r2])
+
+ def test_attributes(self):
+ """check delta computation"""
+ delta = PerfResultDelta(self.c1, self.c2)
+ self.assertEquals(delta.delta,
+ self.c2.min_elapsed - self.c1.min_elapsed)
+ self.assertAlmostEquals(delta.percent, 0.10)
+
+ def test_delta_is_computed_from_one_value_only(self):
+ delta = PerfResultDelta(self.c1, None)
+ self.assertEquals(delta.delta, 0)
+ self.assertEquals(delta.percent, 0.0)
+
+ delta = PerfResultDelta(None, self.c2)
+ self.assertEquals(delta.delta, 0)
+ self.assertEquals(delta.percent, 0.0)
+
+
+class TestPage(TestCaseInTempDir):
+ """check that every part of the page and the page itself can be
+ generated without errors"""
+
+ def setUp(self):
+ self.lines = testlines[:]
+ self.perftable = PerfTable(self.lines)
+
+    def check_serialize_html(self, html):
+        """render a unicode string from the given 'html' tree.
+
+        :param html: pyxml html object tree
+
+        Serializing must not raise; the output itself is not checked.
+        """
+        unicode(html)
+
+ def test_gen_image_map(self):
+        """check image map generation for 20 PerfResults"""
+ samples = [
+ PerfResultCollection(
+ [PerfResult(
+ elapsed_time=i,
+ test_id='test_id',
+ revision_id='revision %s' % (i,),
+ revision=i
+ )
+ ]
+ ) for i in range(20)
+ ]
+ x = Page().gen_image_map(samples, revisions=range(20))
+ self.check_serialize_html(x)
+
+ def test_report(self):
+ """check revision report showing changes to prev revision"""
+
+ p1 = [PerfResultCollection([PerfResult(elapsed_time=i,
+ date=float(i),
+ revision = 1,
+ test_id='test123',
+ revision_id='one')])
+ for i in range(1, 4)]
+ p2 = [PerfResultCollection([PerfResult(elapsed_time=i,
+ date=float(i),
+ test_id='test123',
+ revision = 2,
+ revision_id='two')])
+ for i in range(2, 5)]
+
+ deltas = [PerfResultDelta(*pair) for pair in zip(p1, p2)]
+ self.check_serialize_html(Page().render_report(deltas))
+
+ def test_header(self):
+ """check header generation"""
+ p1 = PerfResultCollection([PerfResult(revision_date=124.8,
+ nick="hello",
+ revision=100)])
+ p2 = PerfResultCollection([PerfResult(revision_date=12456.3,
+ nick="hello",
+ revision=200 )])
+ self.check_serialize_html(Page().render_header(
+ p1.results[0], p2.results[0]))
+
+ def test_table(self):
+ """check main reporting table generation"""
+ p1 = PerfResultCollection(
+ [PerfResult(
+ elapsed_time=2,
+ revision_date=124.8,
+ revision=100,
+ test_id=('bzrlib.benchmarks.bench_add.AddBenchmark.'
+ 'test_one_add_kernel_like_tree'),
+ )
+ ]
+ )
+ p2 = PerfResultCollection(
+ [PerfResult(
+ elapsed_time=3,
+ revision_date=12456.3,
+ revision=200,
+ test_id=('bzrlib.benchmarks.bench_add.AddBenchmark.'
+ 'test_one_add_kernel_like_tree'),
+ ),
+ ]
+ )
+
+ samples = [
+ PerfResultCollection(
+ [PerfResult(
+ elapsed_time=i,
+ test_id='test_id',
+ revision_id='revision%s' % (i,),
+ revision = i
+ ),
+ ],
+ ) for i in range(20)
+ ]
+
+ images = [Page().gen_image_map(samples, revisions=range(20))
+ for i in range(4)]
+
+ d1 = [PerfResultCollection(
+ [PerfResult(
+ elapsed_time=i,
+ date = 130.0,
+ test_id='test123',
+ revision=1,
+ revision_id='one',
+ ),
+ ],
+ ) for i in range(1, 4)
+ ]
+ d2 = [PerfResultCollection(
+ [PerfResult(
+ elapsed_time=i,
+ date = 140.0,
+ test_id='test123',
+ revision=2,
+ revision_id='two',
+ ),
+ ],
+ ) for i in range(2, 5)
+ ]
+ deltas = [PerfResultDelta(*pair) for pair in zip(d1, d2)]
+ p = Page().render_table(deltas, images)
+ self.check_serialize_html(p)
+
+    def test_page_rendering_on_sample_dataset(self):
+        """render a full page from the sample dataset below"""
+        perftable = PerfTable(testdata.splitlines(), path_to_branch='.')
+ page = Page(perftable).render()
+ self.check_serialize_html(page)
+
+testdata = """
+
+--date 1151350547.87 pqm at pqm.ubuntu.com-20060626193547-43661d1377f72b4d
+ 2744ms bzrlib.benchmarks.bench_add.AddBenchmark.test_one_add_kernel_like_tree
+
+ 3406ms bzrlib.benchmarks.bench_bench.MakeKernelLikeTreeBenchmark.test_make_kernel_like_tree
+
+13492ms bzrlib.benchmarks.bench_checkout.CheckoutBenchmark.test_build_kernel_like_tree
+
+27762ms bzrlib.benchmarks.bench_commit.CommitBenchmark.test_commit_kernel_like_tree
+
+ 155ms bzrlib.benchmarks.bench_inventory.InvBenchmark.test_make_10824_inv_entries
+
+ 58ms bzrlib.benchmarks.bench_log.LogBenchmark.test_cmd_log
+
+ 338ms bzrlib.benchmarks.bench_log.LogBenchmark.test_cmd_log_subprocess
+
+ 393ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log
+
+ 54ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log_screenful
+
+ 37ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log_screenful_line
+
+ 26ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log_screenful_short
+
+ 687ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log_verbose
+
+ 98ms bzrlib.benchmarks.bench_log.LogBenchmark.test_merge_log
+
+ 104ms bzrlib.benchmarks.bench_osutils.WalkDirsBenchmark.test_walkdirs_kernel_like_tree
+
+ 283ms bzrlib.benchmarks.bench_rocks.RocksBenchmark.test_rocks
+
+ 2222ms bzrlib.benchmarks.bench_status.StatusBenchmark.test_no_changes_known_kernel_like_tree
+
+ 1075ms bzrlib.benchmarks.bench_status.StatusBenchmark.test_no_ignored_unknown_kernel_like_tree
+
+ 510ms bzrlib.benchmarks.bench_transform.TransformBenchmark.test_canonicalize_path
+
+ 406ms bzrlib.benchmarks.bench_workingtree.WorkingTreeBenchmark.test_is_ignored_10824_calls
+
+ 3ms bzrlib.benchmarks.bench_workingtree.WorkingTreeBenchmark.test_is_ignored_single_call
+
+ 294ms bzrlib.benchmarks.bench_workingtree.WorkingTreeBenchmark.test_list_files_kernel_like_tree
+
+ 1081ms bzrlib.benchmarks.bench_workingtree.WorkingTreeBenchmark.test_list_files_unknown_kernel_like_tree
+
+--date 1152797874.33 pqm at pqm.ubuntu.com-20060713133754-64c134fffd39fd99
+ 2588ms bzrlib.benchmarks.bench_add.AddBenchmark.test_one_add_kernel_like_tree
+
+ 3467ms bzrlib.benchmarks.bench_bench.MakeKernelLikeTreeBenchmark.test_make_kernel_like_tree
+
+13637ms bzrlib.benchmarks.bench_checkout.CheckoutBenchmark.test_build_kernel_like_tree
+
+28816ms bzrlib.benchmarks.bench_commit.CommitBenchmark.test_commit_kernel_like_tree
+
+ 207ms bzrlib.benchmarks.bench_inventory.InvBenchmark.test_make_10824_inv_entries
+
+ 54ms bzrlib.benchmarks.bench_log.LogBenchmark.test_cmd_log
+
+ 340ms bzrlib.benchmarks.bench_log.LogBenchmark.test_cmd_log_subprocess
+
+ 396ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log
+
+ 54ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log_screenful
+
+ 41ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log_screenful_line
+
+ 26ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log_screenful_short
+
+ 691ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log_verbose
+
+ 98ms bzrlib.benchmarks.bench_log.LogBenchmark.test_merge_log
+
+ 105ms bzrlib.benchmarks.bench_osutils.WalkDirsBenchmark.test_walkdirs_kernel_like_tree
+
+ 280ms bzrlib.benchmarks.bench_rocks.RocksBenchmark.test_rocks
+
+ 2229ms bzrlib.benchmarks.bench_status.StatusBenchmark.test_no_changes_known_kernel_like_tree
+
+ 1309ms bzrlib.benchmarks.bench_status.StatusBenchmark.test_no_ignored_unknown_kernel_like_tree
+
+ 504ms bzrlib.benchmarks.bench_transform.TransformBenchmark.test_canonicalize_path
+
+ 31ms bzrlib.benchmarks.bench_workingtree.WorkingTreeBenchmark.test_is_ignored_10824_calls
+
+ 0ms bzrlib.benchmarks.bench_workingtree.WorkingTreeBenchmark.test_is_ignored_single_call
+
+ 294ms bzrlib.benchmarks.bench_workingtree.WorkingTreeBenchmark.test_list_files_kernel_like_tree
+
+ 380ms bzrlib.benchmarks.bench_workingtree.WorkingTreeBenchmark.test_list_files_unknown_kernel_like_tree
+
+--date 1152818329.6 pqm at pqm.ubuntu.com-20060713191849-c0cbdf94d208fa69
+ 1982ms bzrlib.benchmarks.bench_add.AddBenchmark.test_one_add_kernel_like_tree
+
+ 3748ms bzrlib.benchmarks.bench_bench.MakeKernelLikeTreeBenchmark.test_make_kernel_like_tree
+
+13696ms bzrlib.benchmarks.bench_checkout.CheckoutBenchmark.test_build_kernel_like_tree
+
+28351ms bzrlib.benchmarks.bench_commit.CommitBenchmark.test_commit_kernel_like_tree
+
+ 207ms bzrlib.benchmarks.bench_inventory.InvBenchmark.test_make_10824_inv_entries
+
+ 53ms bzrlib.benchmarks.bench_log.LogBenchmark.test_cmd_log
+
+ 334ms bzrlib.benchmarks.bench_log.LogBenchmark.test_cmd_log_subprocess
+
+ 397ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log
+
+ 55ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log_screenful
+
+ 41ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log_screenful_line
+
+ 26ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log_screenful_short
+
+ 692ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log_verbose
+
+ 99ms bzrlib.benchmarks.bench_log.LogBenchmark.test_merge_log
+
+ 107ms bzrlib.benchmarks.bench_osutils.WalkDirsBenchmark.test_walkdirs_kernel_like_tree
+
+ 281ms bzrlib.benchmarks.bench_rocks.RocksBenchmark.test_rocks
+
+ 2296ms bzrlib.benchmarks.bench_status.StatusBenchmark.test_no_changes_known_kernel_like_tree
+
+ 1119ms bzrlib.benchmarks.bench_status.StatusBenchmark.test_no_ignored_unknown_kernel_like_tree
+
+ 503ms bzrlib.benchmarks.bench_transform.TransformBenchmark.test_canonicalize_path
+
+ 30ms bzrlib.benchmarks.bench_workingtree.WorkingTreeBenchmark.test_is_ignored_10824_calls
+
+ 0ms bzrlib.benchmarks.bench_workingtree.WorkingTreeBenchmark.test_is_ignored_single_call
+
+ 507ms bzrlib.benchmarks.bench_workingtree.WorkingTreeBenchmark.test_list_files_kernel_like_tree
+
+ 353ms bzrlib.benchmarks.bench_workingtree.WorkingTreeBenchmark.test_list_files_unknown_kernel_like_tree
+
+--date 1152977475.32 pqm at pqm.ubuntu.com-20060715153115-59f1601f31ecc38f
+ 1972ms bzrlib.benchmarks.bench_add.AddBenchmark.test_one_add_kernel_like_tree
+
+ 3512ms bzrlib.benchmarks.bench_bench.MakeKernelLikeTreeBenchmark.test_make_kernel_like_tree
+
+13703ms bzrlib.benchmarks.bench_checkout.CheckoutBenchmark.test_build_kernel_like_tree
+
+28276ms bzrlib.benchmarks.bench_commit.CommitBenchmark.test_commit_kernel_like_tree
+
+ 211ms bzrlib.benchmarks.bench_inventory.InvBenchmark.test_make_10824_inv_entries
+
+ 53ms bzrlib.benchmarks.bench_log.LogBenchmark.test_cmd_log
+
+ 333ms bzrlib.benchmarks.bench_log.LogBenchmark.test_cmd_log_subprocess
+
+ 397ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log
+
+ 56ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log_screenful
+
+ 41ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log_screenful_line
+
+ 26ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log_screenful_short
+
+ 694ms bzrlib.benchmarks.bench_log.LogBenchmark.test_log_verbose
+
+ 100ms bzrlib.benchmarks.bench_log.LogBenchmark.test_merge_log
+
+ 110ms bzrlib.benchmarks.bench_osutils.WalkDirsBenchmark.test_walkdirs_kernel_like_tree
+
+ 271ms bzrlib.benchmarks.bench_rocks.RocksBenchmark.test_rocks
+
+ 2259ms bzrlib.benchmarks.bench_status.StatusBenchmark.test_no_changes_known_kernel_like_tree
+
+ 1548ms bzrlib.benchmarks.bench_status.StatusBenchmark.test_no_ignored_unknown_kernel_like_tree
+
+ 511ms bzrlib.benchmarks.bench_transform.TransformBenchmark.test_canonicalize_path
+
+ 30ms bzrlib.benchmarks.bench_workingtree.WorkingTreeBenchmark.test_is_ignored_10824_calls
+
+ 0ms bzrlib.benchmarks.bench_workingtree.WorkingTreeBenchmark.test_is_ignored_single_call
+
+ 296ms bzrlib.benchmarks.bench_workingtree.WorkingTreeBenchmark.test_list_files_kernel_like_tree
+
+ 387ms bzrlib.benchmarks.bench_workingtree.WorkingTreeBenchmark.test_list_files_unknown_kernel_like_tree
+"""
=== added file 'tools/benchmark_report.css'
--- tools/benchmark_report.css 1970-01-01 00:00:00 +0000
+++ tools/benchmark_report.css 2006-08-16 20:02:38 +0000
@@ -0,0 +1,63 @@
+ body {
+ font: 0.9em "Trebuchet MS", Verdana, Arial, sans-serif;
+ color: black;
+ background-color: white;
+ }
+
+ table.main {
+ margin-top: 1.5em;
+ }
+
+ table {
+ margin-bottom: 1.5em;
+ }
+
+ table, td {
+ border: 1px solid black;
+ height: 1%;
+ }
+
+ td.testid {
+ font-weight: bold;
+ }
+
+ td {
+ vertical-align: top;
+ }
+
+ td div {
+ white-space: nowrap;
+ }
+
+ tr {
+ border-width: 0px;
+ }
+
+ td img {
+ border: 1px solid white;
+ }
+
+ .titleline {
+ font-size: 1.4em;
+ font-weight: bold;
+ }
+
+ ul {
+ margin: 0px;
+ padding: 0px;
+ }
+
+ li {
+ list-style-type: none;
+ padding: 0px;
+ margin: 0px;
+ }
+
+ .delta {
+ min-width: 13em;
+ }
+
+ .red {
+ background-color: red;
+ color: white;
+ }
+
=== added file 'tools/benchmark_report.py'
--- tools/benchmark_report.py 1970-01-01 00:00:00 +0000
+++ tools/benchmark_report.py 2006-08-16 20:49:42 +0000
@@ -0,0 +1,578 @@
+#!/usr/bin/env python
+
+import Image
+import ImageDraw
+import urllib
+import StringIO
+import math
+import sys
+import colorsys
+
+from bzrlib.branch import Branch
+from bzrlib.log import show_log, LongLogFormatter
+from bzrlib.osutils import format_date
+from bzrlib.errors import NotBranchError
+from pyxml import html as pyhtml
+
+
+class PerfTable:
+ """parses performance history data files and yields PerfResult objects
+ through the get_results method.
+
+    If a branch is given, it is used to get more information for each
+    revision we have data for.
+ """
+
+    def __init__(self, iterlines=[], path_to_branch=None):
+        """:param iterlines: lines of performance history data,
+        e.g., history_file.readlines()
+ """
+ self.path_to_branch = path_to_branch
+ self.branch = None
+ self._revision_cache = {}
+ if self.path_to_branch:
+ try:
+ self.branch, relpath = Branch.open_containing(path_to_branch)
+ except NotBranchError:
+                print self.path_to_branch, 'does not seem to be a branch!'
+                print 'The data cannot be sorted properly, aborting!'
+ sys.exit(-1)
+
+ self.results = list(self.parse(iterlines))
+
+ def parse(self, iterlines):
+ """parse lines like
+ --date 1152625530.0 hacker at canonical.com-20..6dc
+ 1906ms bzrlib....one_add_kernel_like_tree
+ """
+ if self.branch:
+ self.branch.lock_read()
+ date = None
+ revision_id = None
+ for line in iterlines:
+ line = line.strip()
+ if not line:
+ continue
+ if line.startswith('--date'):
+ _, date, revision_id = line.split(None, 2)
+ date = float(date)
+ continue
+ perfresult = PerfResult(date=date, revision_id=revision_id)
+ elapsed_time, test_id = line.split(None, 1)
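+            # elapsed_time is e.g. '100ms'; strip the 'ms' unit suffix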
+ perfresult.elapsed_time = int(elapsed_time[:-2])
+ perfresult.test_id = test_id.strip()
+ yield self.annotate(perfresult)
+ if self.branch:
+ self.branch.unlock()
+
+ def add_lines(self, lines):
+ """add lines of performance history data """
+
+ self.results += list(self.parse(lines))
+
+ def get_time_for_revision_id(self, revision_id):
+        """return the date of the revision, or 0 if unknown"""
+ if revision_id in self._revision_cache:
+ return self._revision_cache[revision_id][1].timestamp
+ return 0
+
+ def get_time(self, revision_id):
+ """return revision date or the date of recording the
+ performance history data"""
+
+ t = self.get_time_for_revision_id(revision_id)
+ if t:
+ return t
+ result = list(self.get_results(revision_ids=[revision_id],
+ sorted_by_rev_date=False))[0]
+ return result.date
+
+ def annotate(self, result):
+        """Try to attach extra information about each revision to the
+        PerfResult objects. The information is retrieved from the
+        branch object.
+ """
+ if self.branch is None:
+ return result
+ revision_id = result.revision_id
+ if revision_id in self._revision_cache:
+ revision, rev, nick = self._revision_cache[revision_id]
+ else:
+ revision = self.branch.revision_id_to_revno(revision_id)
+ rev = self.branch.repository.get_revision(revision_id)
+ nick = self.branch._get_nick()
+ self._revision_cache[revision_id] = (revision, rev, nick)
+
+ result.revision = revision
+ result.committer = rev.committer
+ result.message = rev.message
+        result.timestamp = rev.timestamp
+ result.revision_date = format_date(rev.timestamp, rev.timezone or 0)
+ result.nick = nick
+ return result
+
+ def get_results(self, test_ids=None, revision_ids=None,
+ sorted_by_rev_date=True):
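+        """yield PerfResult objects, optionally restricted to the
+        given test_ids and revision_ids.
+        """
+        # XXX sorted_by_rev_date is accepted but not implemented yet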
+ # XXX we might want to build indexes for speed
+ for result in self.results:
+ if test_ids and result.test_id not in test_ids:
+ continue
+ if revision_ids and result.revision_id not in revision_ids:
+ continue
+ yield result
+
+ def list_values_of(self, attr):
+ """return a list of unique values of the specified attribute
+ of PerfResult objects"""
+ return dict.fromkeys((getattr(r, attr) for r in self.results)).keys()
+
+ def get_testid2collections(self):
+ """return a mapping of test_id to list of PerfResultCollection
+ sorted by revision"""
+
+ test_ids = self.list_values_of('test_id')
+
+ testid2resultcollections = {}
+ for test_id in test_ids:
+ revnos = {}
+ for result in self.get_results(test_ids=[test_id]):
+ revnos.setdefault(result.revision, []).append(result)
+ for revno, results in revnos.iteritems():
+ collection = PerfResultCollection(results)
+ l = testid2resultcollections.setdefault(test_id, [])
+ l.append(collection)
+ # sort collection list by revision number
+ for collections in testid2resultcollections.itervalues():
+            collections.sort(key=lambda c: c.revision)
+ return testid2resultcollections
+
+
+class PerfResult:
+ """Holds information about a benchmark run of a particular test run."""
+
+    def __init__(self, date=0.0, test_id="", revision=0,
+ revision_id="NONE", timestamp=0.0,
+ revision_date=0.0, elapsed_time=-1,
+ committer="", message="", nick=""):
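+        # store every constructor argument as an instance attribute;
+        # locals() also contains 'self', which is deleted again below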
+ self.__dict__.update(locals())
+ del self.self
+
+
+class PerfResultCollection(object):
+    """Holds information about several PerfResult objects. The
+    objects should all have the same test_id and revision_id."""
+
+ def __init__(self, results=None):
+ if results is None:
+ self.results = []
+ else:
+ self.results = results[:]
+ self.check()
+
+ def __repr__(self):
+ self.check()
+ if not self.results:
+ return "<PerfResultCollection EMPTY>"
+ sample = self.results[0]
+        return "<PerfResultCollection test_id=%s, revno=%s>" % (
+ sample.test_id, sample.revision)
+
+ @property
+ def min_elapsed(self):
+ return self.getfastest().elapsed_time
+
+ def getfastest(self):
+ x = None
+ for res in self.results:
+ if x is None or res.elapsed_time < x.elapsed_time:
+ x = res
+ return x
+
+ @property
+ def test_id(self):
+ # check for empty results?
+ return self.results[0].test_id
+
+ @property
+ def revision_id(self):
+ # check for empty results?
+ return self.results[0].revision_id
+
+ @property
+ def revision(self):
+ # check for empty results?
+ return self.results[0].revision
+
+ def check(self):
+ for s1, s2 in zip(self.results, self.results[1:]):
+ assert s1.revision_id == s2.revision_id
+ assert s1.test_id == s2.test_id
+ assert s1.revision == s2.revision
+ assert s1.date != s2.date
+
+ def append(self, sample):
+ self.results.append(sample)
+ self.check()
+
+ def extend(self, results):
+ self.results.extend(results)
+ self.check()
+
+ def __len__(self):
+ return len(self.results)
+
+
+class PerfResultDelta:
+    """represents the difference between two PerfResultCollections"""
+
+ def __init__(self, _from, _to=None):
+ if _from is None:
+ _from = _to
+ if _to is None:
+ _to = _from
+ if isinstance(_from, list):
+ _from = PerfResultCollection(_from)
+ if isinstance(_to, list):
+ _to = PerfResultCollection(_to)
+ assert isinstance(_from, PerfResultCollection)
+ assert isinstance(_to, PerfResultCollection)
+ assert _from.test_id == _to.test_id, (_from.test_id, _to.test_id)
+ self._from = _from
+ self._to = _to
+
+ @property
+ def test_id(self):
+ return self._to.test_id
+
+ @property
+ def delta(self):
+ return self._to.min_elapsed - self._from.min_elapsed
+
+ @property
+ def percent(self):
+ m1 = self._from.min_elapsed
+ m2 = self._to.min_elapsed
+ if m1 == 0:
+ return 0.0
+ return float(m2-m1) / float(m1)
+
+
+class Page:
+    """generates a benchmark summary page.
+
+    The generated page is self-contained; all images are inlined.
+    The page refers to a local css file 'benchmark_report.css'.
+    """
+
+ def __init__(self, perftable=None):
+ """perftable is of type PerfTable"""
+ self.perftable = perftable
+
+ def render(self):
+ """return full rendered page html tree for the perftable."""
+
+ perftable = self.perftable
+ testid2collections = perftable.get_testid2collections()
+
+ # loop to get per-revision collection and the
+ # maximum delta revision collections.
+ maxdeltas = []
+ revdeltas = {}
+ start = end = None
+ for testid, collections in testid2collections.iteritems():
+ if len(collections) < 2: # less than two revisions sampled
+ continue
+ # collections are sorted by lowest REVNO first
+ delta = PerfResultDelta(collections[0], collections[-1])
+ maxdeltas.append(delta)
+
+ # record deltas on target revisions
+ for col1, col2 in zip(collections, collections[1:]):
+ revdelta = PerfResultDelta(col1, col2)
+ l = revdeltas.setdefault(col2.revision, [])
+ l.append(revdelta)
+
+ # keep track of overall earliest and latest revision
+ if start is None or delta._from.revision < start.revision:
+ start = delta._from.results[0]
+ if end is None or delta._to.revision > end.revision:
+ end = delta._to.results[0]
+
+ # sort by best changes first
+ maxdeltas.sort(key=lambda x: x.percent)
+
+ # generate revision reports
+ revno_deltas = revdeltas.items()
+ revno_deltas.sort()
+ revno_deltas.reverse()
+ revreports = []
+ for revno, deltas in revno_deltas:
+ # sort by best changes first
+ deltas.sort(key=lambda x: x.percent)
+ revreports.append(self.render_report(deltas))
+
+ # generate images
+ #
+ # generate the x axis, a list of revision numbers
+ xaxis = perftable.list_values_of('revision')
+ xaxis.sort()
+ # samples of tests in the order of max_deltas test_ids
+ samples = [testid2collections[delta.test_id] for delta in maxdeltas]
+ # images in the order of max_deltas test_ids
+ images = [self.gen_image_map(sample, xaxis) for sample in samples]
+
+ page = pyhtml.html(
+ pyhtml.head(
+                pyhtml.meta(
+                    # 'http-equiv' is not a valid keyword name,
+                    # so pass the attributes via a dict
+                    **{'http-equiv': "Content-Type",
+                       'content': "text/html; charset=latin1"}
+                ),
+ pyhtml.link(rel="stylesheet",
+ type="text/css",
+ href="benchmark_report.css")
+
+ ),
+ pyhtml.body(
+ self.render_header(start, end),
+ self.render_table(maxdeltas, images, anchors=False),
+ *revreports
+ )
+ )
+ return page
+
+ def _revision_report_name(self, sample):
+ """return anchor name for reports,
+ used to link from an image to a report"""
+ return 'revno_%s' % (sample.revision,)
+
+ def _revision_test_report_name(self, sample):
+ """return anchor name for reports,
+ used to link from an image to a report"""
+ return 'revno_%s_test_id_%s' % (sample.revision, sample.test_id)
+
+    def gen_image_map(self, samples, revisions=[]):
+        """return a tuple of an inlined image and the corresponding
+        image map.
+
+        :param samples: list of PerfResultCollections
+        :param revisions: list of revision numbers, representing the
+            x axis of the graph
+        """
+
+ revision2collection = dict(((s.revision, s) for s in samples))
+ revision2delta = dict()
+ for col1, col2 in zip(samples, samples[1:]):
+ revision2delta[col2.revision] = PerfResultDelta(col1, col2)
+ max_value = max([s.min_elapsed for s in samples])
+ map_name = samples[0].test_id # link between the image and the image map
+        if max_value == 0:
+            # nothing to draw
+            return (pyhtml.span('No value greater than 0'), pyhtml.span(''))
+
+        step = 3  # pixels for each revision on the x axis
+        xsize = (len(revisions) - 1) * step + 2
+        ysize = 32  # height of the image
+ im = Image.new("RGB", (xsize + 2, ysize), 'white')
+ draw = ImageDraw.Draw(im)
+
+ areas = []
+ for x, revno in enumerate(revisions):
+ if revno not in revision2collection: # data for this revision?
+ continue
+ sample = revision2collection[revno]
+            # scale the value into the image height
+            y = ysize - (sample.min_elapsed * (ysize - 2) / max_value)
+            draw.rectangle((x*step + 1, y, x*step + step - 1, ysize),
+                           fill="#BBBBBB")
+
+ head_color = "#000000"
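+            # colour the top of the bar green for a >15% speedup and
+            # red for a >15% slowdown vs. the previous sampled revision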
+ if revno in revision2delta:
+ change = revision2delta[revno].percent
+ if change < -0.15:
+ head_color = "#00FF00"
+ elif change > 0.15:
+ head_color = "#FF0000"
+                draw.rectangle((x*step + 1, y - 1, x*step + step - 1, y + 1),
+ fill=head_color)
+
+ areas.append(
+ pyhtml.area(
+ shape="rect",
+                    coords='%s,0,%s,%s' % (x*step, (x+1)*step, ysize),
+                    href='#%s' % (self._revision_test_report_name(sample),),
+                    title="%s Value: %s" % (sample.revision, sample.min_elapsed)
+ ))
+ del draw
+
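+        # inline the rendered GIF as a data: URI so that the report
+        # page is fully self-contained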
+ f = StringIO.StringIO()
+ im.save(f, "GIF")
+ image_src = 'data:image/gif,%s' % (urllib.quote(f.getvalue()),)
+        html_image = pyhtml.img(src=image_src,
+                                alt='Benchmark graph of %s' % (
+                                    self._test_id(samples[0]),),
+                                usemap='#%s' % (map_name,))
+ html_map = pyhtml.map(areas, name=map_name)
+ return html_image, html_map
+
+    def _color_for_change(self, delta, max_value=20):
+        """return green for a negative percentage change, red for a
+        positive one, and grey if there are too few samples to judge.
+
+        The colors range from light green to fully saturated green
+        and light red to fully saturated red. Full saturation is
+        reached when abs(change in percent) >= max_value.
+        """
+        if len(delta._from) < 3 or len(delta._to) < 3:
+            # too few samples for a meaningful comparison
+            return '#%02x%02x%02x' % (200, 200, 200)  # grey
+
+ change_in_percent = delta.percent * 100
+ if change_in_percent < 0:
+            basic_color = (0, 1, 0)  # green
+        else:
+            basic_color = (1, 0, 0)  # red
+
+        change = min(abs(change_in_percent), max_value)
+
+        # hsv components are in [0, 1]; scale the result up to 0-255 rgb
+        h, s, v = colorsys.rgb_to_hsv(*basic_color)
+        rgb = colorsys.hsv_to_rgb(h, float(change) / max_value, 1.0)
+        return '#%02x%02x%02x' % tuple(int(c * 255) for c in rgb)
+
+ def _change_report(self, delta):
+        """return a red, green or grey colored html representation
+        of a PerfResultDelta object.
+        """
+
+ fromtimes = [x.elapsed_time for x in delta._from.results]
+ totimes = [x.elapsed_time for x in delta._to.results]
+
+ results = pyhtml.div(
+            "r%d [%s] -> r%d [%s]" % (delta._from.revision,
+ ", ".join(map(str, fromtimes)),
+ delta._to.revision,
+ ", ".join(map(str, totimes)))
+ )
+ return pyhtml.td(
+ pyhtml.div(
+                '%+.1f%% change [%.0f - %.0f = %+.0f ms]' % (
+ delta.percent * 100,
+ delta._to.min_elapsed,
+ delta._from.min_elapsed,
+ delta.delta),
+            style="background-color: %s" % (
+ self._color_for_change(delta)),
+ ),
+ results,
+ )
+
+ def render_revision_header(self, sample):
+        """return a header for a report with information about the
+        committer, log message and revision date.
+        """
+ revision_id = pyhtml.li('Revision ID: %s' % (sample.revision_id,))
+ revision = pyhtml.li('Revision: %s' % (sample.revision,))
+ date = pyhtml.li('Date: %s' % (sample.revision_date,))
+ logmessage = pyhtml.li('Log Message: %s' % (sample.message,))
+ committer = pyhtml.li('Committer: %s' % (sample.committer,))
+ return pyhtml.ul([date, committer, revision, revision_id, logmessage])
+
+ def render_report(self, deltas):
+ """return a report table with header.
+
+ All deltas must have the same revision_id."""
+ deltas = [d for d in deltas if d.test_id]
+
+ sample = deltas[0]._to.getfastest()
+ report_list = self.render_revision_header(sample)
+
+ table = self.render_table(deltas)
+ return pyhtml.div(
+ pyhtml.a(name=self._revision_report_name(sample)),
+ report_list,
+ table,
+ )
+
+ def render_header(self, start, end):
+ """return the header of the page, sample output:
+
+ benchmarks on bzr.dev
+ from r1231 2006-04-01
+ to r1888 2006-07-01
+ """
+
+ return [
+ pyhtml.div(
+ 'Benchmarks for %s' % (start.nick,),
+ class_="titleline maintitle",
+ ),
+ pyhtml.div(
+ 'from r%s %s' % (
+ start.revision,
+ start.revision_date,
+ ),
+ class_="titleline",
+ ),
+ pyhtml.div(
+ 'to r%s %s' % (
+ end.revision,
+ end.revision_date,
+ ),
+ class_="titleline",
+ ),
+ ]
+
+ def _test_id(self, sample):
+ """helper function, return a short form of a test_id """
+ return '.'.join(sample.test_id.split('.')[-2:])
+
+ def render_table(self, deltas, images=None, anchors=True):
+ """return an html table for deltas and images.
+
+ this function is used to generate the main table and
+ the table of each report"""
+
+ classname = "main"
+ if images is None:
+ classname = "report"
+ images = [None] * len(deltas)
+
+ table = []
+ for delta, image in zip(deltas, images):
+ row = []
+ anchor = ''
+ if anchors:
+ anchor = pyhtml.a(name=self._revision_test_report_name(
+ delta._to.getfastest()))
+ row.append(pyhtml.td(anchor, self._test_id(delta._to.getfastest()),
+ class_='testid'))
+ if image:
+ row.append(pyhtml.td(pyhtml.div(*image)))
+ row.append(self._change_report(delta))
+ table.append(pyhtml.tr(*row))
+ return pyhtml.table(border=1, class_=classname, *table)
+
+
+def main(path_to_perf_history='../.perf_history', path_to_branch='.'):
+ try:
+ perftable = PerfTable(file(path_to_perf_history).readlines(),
+ path_to_branch)
+ except IOError:
+ print 'Cannot find a data file. Please specify one.'
+ sys.exit(-1)
+ page = Page(perftable).render()
+ f = file('benchmark_report.html', 'w')
+ try:
+ f.write(page.unicode(indent=2).encode('latin-1'))
+ finally:
+ f.close()
+
+if __name__ == '__main__':
+ if len(sys.argv) == 1:
+ main()
+ elif len(sys.argv) == 2:
+ main(sys.argv[1])
+    elif len(sys.argv) == 3:
+ main(*sys.argv[1:3])
+ else:
+ print 'Usage: benchmark_report.py [perf_history [branch]]'
+
=== added file 'tools/pyxml.py'
--- tools/pyxml.py 1970-01-01 00:00:00 +0000
+++ tools/pyxml.py 2006-08-16 20:52:30 +0000
@@ -0,0 +1,220 @@
+"""
+generic (and pythonic :-) xml tag and namespace objects
+used to generate an html object tree and to render it
+into a unicode string afterwards. It is used by
+the internal benchmark_report.py script.
+
+Note that the code below is extracted from the MIT
+licensed py library (http://codespeak.net/py)
+and does not deal with parsing xml/html at all.
+"""
+
+import re
+
+class SimpleUnicodeVisitor(object):
+ """ recursive visitor to write unicode. """
+ def __init__(self, write, indent=0, curindent=0, shortempty=True):
+ self.write = write
+ self.cache = {}
+ self.visited = {} # for detection of recursion
+ self.indent = indent
+ self.curindent = curindent
+ self.parents = []
+ self.shortempty = shortempty # short empty tags or not
+
+ def visit(self, node):
+ """ dispatcher on node's class/bases name. """
+ cls = node.__class__
+ try:
+ visitmethod = self.cache[cls]
+ except KeyError:
+ for subclass in cls.__mro__:
+ visitmethod = getattr(self, subclass.__name__, None)
+ if visitmethod is not None:
+ break
+ else:
+ visitmethod = self.object
+ self.cache[cls] = visitmethod
+ visitmethod(node)
+
+    def object(self, obj):
+        self.write(unicode(obj))
+
+ def list(self, obj):
+ assert id(obj) not in self.visited
+ self.visited[id(obj)] = 1
+ map(self.visit, obj)
+
+ def Tag(self, tag):
+ assert id(tag) not in self.visited
+ try:
+ tag.parent = self.parents[-1]
+ except IndexError:
+ tag.parent = None
+ self.visited[id(tag)] = 1
+ tagname = getattr(tag, 'xmlname', tag.__class__.__name__)
+ if self.curindent:
+ self.write("\n" + u' ' * self.curindent)
+ if tag:
+ self.curindent += self.indent
+ self.write(u'<%s%s>' % (tagname, self.attributes(tag)))
+ self.parents.append(tag)
+ map(self.visit, tag)
+ self.parents.pop()
+ self.write(u'</%s>' % tagname)
+ self.curindent -= self.indent
+ else:
+ nameattr = tagname+self.attributes(tag)
+ if self._issingleton(tagname):
+ self.write(u'<%s/>' % (nameattr,))
+ else:
+ self.write(u'<%s></%s>' % (nameattr, tagname))
+
+ def attributes(self, tag):
+ # serialize attributes
+ attrlist = dir(tag.attr)
+ attrlist.sort()
+ l = []
+ for name in attrlist:
+ res = self.repr_attribute(tag.attr, name)
+ if res is not None:
+ l.append(res)
+ l.extend(self.getstyle(tag))
+ return u"".join(l)
+
+ def repr_attribute(self, attrs, name):
+ if name[:2] != '__':
+ value = getattr(attrs, name)
+ if name.endswith('_'):
+ name = name[:-1]
+ return u' %s="%s"' % (name, escape(unicode(value)))
+
+ def getstyle(self, tag):
+ """ return attribute list suitable for styling. """
+ try:
+ styledict = tag.style.__dict__
+ except AttributeError:
+ return []
+ else:
+ stylelist = [x+': ' + y for x,y in styledict.items()]
+ return [u' style="%s"' % u'; '.join(stylelist)]
+
+ def _issingleton(self, tagname):
+ """can (and will) be overridden in subclasses"""
+ return self.shortempty
+
+
+class Tag(list):
+ class Attr(object):
+ def __init__(self, **kwargs):
+ self.__dict__.update(kwargs)
+
+ def __init__(self, *args, **kwargs):
+ super(Tag, self).__init__(args)
+ self.attr = self.Attr(**kwargs)
+
+ def __unicode__(self):
+ return self.unicode(indent=0)
+
+    def unicode(self, indent=2):
+        l = []
+        SimpleUnicodeVisitor(l.append, indent).visit(self)
+        return u"".join(l)
+
+ def __repr__(self):
+ name = self.__class__.__name__
+ return "<%r tag object %d>" % (name, id(self))
+
+
+# the generic xml namespace
+# provides Tag classes on the fly optionally checking for
+# a tagspecification
+
+class NamespaceMetaclass(type):
+ def __getattr__(self, name):
+ if name[:1] == '_':
+ raise AttributeError(name)
+ if self == Namespace:
+ raise ValueError("Namespace class is abstract")
+ tagspec = self.__tagspec__
+ if tagspec is not None and name not in tagspec:
+ raise AttributeError(name)
+ classattr = {}
+ if self.__stickyname__:
+ classattr['xmlname'] = name
+ cls = type(name, (self.__tagclass__,), classattr)
+ setattr(self, name, cls)
+ return cls
+
+class Namespace(object):
+ __tagspec__ = None
+ __tagclass__ = Tag
+ __metaclass__ = NamespaceMetaclass
+ __stickyname__ = False
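+# example: the first access to e.g. html.div creates a 'div' Tag
+# subclass via the metaclass and caches it on the namespace class;
+# html.div('text', class_='quote') then builds an instance of it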
+
+class _escape:
+    def __init__(self):
+        self.escape = {
+            u'"': u'&quot;', u'<': u'&lt;', u'>': u'&gt;',
+            u'&': u'&amp;', u"'": u'&apos;',
+        }
+ self.charef_rex = re.compile(u"|".join(self.escape.keys()))
+
+ def _replacer(self, match):
+ return self.escape[match.group(0)]
+
+ def __call__(self, ustring):
+ """ xml-escape the given unicode string. """
+ return self.charef_rex.sub(self._replacer, ustring)
+
+escape = _escape()
+
+class HtmlVisitor(SimpleUnicodeVisitor):
+
+ single = dict([(x, 1) for x in
+ ('br,img,area,param,col,hr,meta,link,base,'
+ 'input,frame').split(',')])
+
+ def repr_attribute(self, attrs, name):
+ if name == 'class_':
+ value = getattr(attrs, name)
+ if value is None:
+ return
+ return super(HtmlVisitor, self).repr_attribute(attrs, name)
+
+ def _issingleton(self, tagname):
+ return tagname in self.single
+
+
+class HtmlTag(Tag):
+ def unicode(self, indent=2):
+ l = []
+ HtmlVisitor(l.append, indent, shortempty=False).visit(self)
+ return u"".join(l)
+
+
+# exported plain html namespace
+class html(Namespace):
+ __tagclass__ = HtmlTag
+ __stickyname__ = True
+ __tagspec__ = dict([(x,1) for x in (
+ 'a,abbr,acronym,address,applet,area,b,bdo,big,blink,'
+ 'blockquote,body,br,button,caption,center,cite,code,col,'
+ 'colgroup,comment,dd,del,dfn,dir,div,dl,dt,em,embed,'
+ 'fieldset,font,form,frameset,h1,h2,h3,h4,h5,h6,head,html,'
+ 'i,iframe,img,input,ins,kbd,label,legend,li,link,listing,'
+ 'map,marquee,menu,meta,multicol,nobr,noembed,noframes,'
+ 'noscript,object,ol,optgroup,option,p,pre,q,s,script,'
+ 'select,small,span,strike,strong,style,sub,sup,table,'
+ 'tbody,td,textarea,tfoot,th,thead,title,tr,tt,u,ul,xmp,'
+ 'base,basefont,frame,hr,isindex,param,samp,var'
+ ).split(',') if x])
+
+ class Style(object):
+ def __init__(self, **kw):
+ for x, y in kw.items():
+ x = x.replace('_', '-')
+ setattr(self, x, y)
+
=== modified file 'bzrlib/testament.py'
--- bzrlib/testament.py 2006-08-15 07:42:50 +0000
+++ bzrlib/testament.py 2006-08-12 18:46:30 +0000
@@ -183,7 +183,9 @@
assert not contains_whitespace(name)
r.append(' %s:\n' % name)
for line in value.splitlines():
- r.append(u' %s\n' % line)
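+ # testament text must consist of byte strings; encode unicode as utf-8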
+ if not isinstance(line, str):
+ line = line.encode('utf-8')
+ r.append(' %s\n' % line)
return r
def as_sha1(self):
=== modified file 'bzrlib/tests/__init__.py'
--- bzrlib/tests/__init__.py 2006-08-16 05:24:23 +0000
+++ bzrlib/tests/__init__.py 2006-08-16 20:40:20 +0000
@@ -1372,6 +1372,7 @@
'bzrlib.tests.test_whitebox',
'bzrlib.tests.test_workingtree',
'bzrlib.tests.test_xml',
+ 'bzrlib.tests.test_benchmark_report',
]
test_transport_implementations = [
'bzrlib.tests.test_transport_implementations',