Proposal for solving CD Size problems

Mgr. Peter Tuharsky tuharsky at misbb.sk
Fri Sep 29 08:17:26 BST 2006


Well,

it has been said that Knoppix already uses gzip and has used lzma 
before. How did they make it work, if it is such a bad deal as many 
here are suggesting?

Let's learn from them rather than reinvent the wheel, I suggest.

Peter


Phillip Lougher wrote:
> On 9/28/06, Phillip Lougher <phillip.lougher at gmail.com> wrote:
> 
>> 1. Block to be decompressed is on disk only.  Overhead to get
>> decompressed data is seek-time + block I/O + decompression, and this
>> can't be done in parallel.  Five times slower decompression even in
>> this case is no loss only if the ratio of seek+I/O time to
>> decompression time makes decompression overhead negligible, i.e. 5
>> times nothing = nothing.  Taking some measurements I did a couple of
>> years ago (http://tree.celinuxforum.org/CelfPubWiki/SquashFsComparisons),
>> for example, reading a squashfs re-encoded Ubuntu livecd from CDROM
>> took 5 minutes 15.46 seconds with System time of 51.12 seconds, i.e.
>> 16% of the time was decompression overhead.  Five times this is
>> certainly not negligible; figuring it into the stats would make
>> something like a total overhead of 8 minutes 39 seconds with a
>> decompression time of 4 minutes 15 seconds, or 2 times slower.
>> Reading from hard disk, where seek-time and block I/O are a smaller
>> percentage of the overhead, makes the performance loss even worse.
> 
> 8 minutes 39 seconds as opposed to 5 minutes 15.46 seconds == 1.65
> times slower overall.  This is sequential access; real-life random
> access will be worse, as data will need to be re-decompressed from the
> block cache, hitting the full 5 times slower performance (item 2).
> 
> Phillip
> 
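
For reference, here is a quick sanity check of the arithmetic quoted
above (a rough Python sketch; the 315.46 s total and 51.12 s System
times come from Phillip's quoted measurement, and the 5x decompression
slowdown is the assumption under discussion):

    # Figures from the quoted CD-ROM measurement (5 min 15.46 s total,
    # of which 51.12 s of System time was spent decompressing)
    total_time = 315.46   # seconds, whole livecd read
    decomp_time = 51.12   # seconds, decompression (System) time
    slowdown = 5          # assumed decompression slowdown factor

    io_time = total_time - decomp_time            # seek + block I/O, unchanged by the codec
    new_total = io_time + decomp_time * slowdown  # total with the slower codec

    print("new total: %.1f s (~%d min %d s)" % (new_total, new_total // 60, new_total % 60))
    print("overall slowdown: %.2fx" % (new_total / total_time))
    # prints roughly 519.9 s (~8 min 39 s) and 1.65x, matching the quoted figures

So even on a CD-ROM, where seek and block I/O dominate, a 5 times
slower decompressor still costs roughly 65% in total sequential read
time by this estimate.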