lossless compression of still images - recommendations?

Jonesy SPAM_TRAP_gmane at jonz.net
Sun Feb 19 16:28:16 UTC 2017


On Sun, 19 Feb 2017 09:10:59 +0000, Colin Law wrote:
> On 18 February 2017 at 22:43, Karl Auer <kauer at biplane.com.au> wrote:
>> On Sat, 2017-02-18 at 21:11 +0000, Colin Law wrote:
>>> On 17 February 2017 at 21:11, Robert Heller <heller at deepsoft.com>
>>> wrote:
>>> >
>>> > Use tar+bzip2.
>>> I thought that the OP meant that he wanted to compress the series of
>>> images, making use of the fact that they are little changed from one
>>> image to the next. Much like video is stored.
>>
>> Because the images are very alike, they should compress exceptionally
>> well. That's kinda how modern compression works; it looks for recurring
>> sequences in the data and replaces them with index numbers
>> representing those sequences. It stores the sequence once, and the
>> index many times. Multipass compression looks for metasequences and
>> does the same with them.
>>
>> I think the OP will get better compression out of tarring them up
>> first, then compressing them, because that lets the compression program
>> look for sequences across the whole set, not just within one image.
>> However, it would be worth trying both orders (tar-then-compress,
>> and compress-then-tar).
>> It would also be worth trying a few different compression programs.
>
> Yes, of course you are right; I had not considered the fact that
> compressing the tarred fileset will detect repeated patterns across
> the set. I was thinking that the intention was to compress each
> image separately, which I presume would be less likely to find the
> repeats.
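The quoted point about recurring sequences is easy to demonstrate with
any sample image, here called frame001.jpg (a placeholder name): a
stream containing the same data twice compresses to barely more than a
single copy, provided the file fits inside the compressor's match
window, because the second occurrence is stored as a reference to the
first.

  $ # one compressed copy (JPEG data is already compressed, so this
  $ # will be close to the original size)
  $ xz -c frame001.jpg | wc -c

  $ # two concatenated copies: the repeat is found and stored once
  $ cat frame001.jpg frame001.jpg | xz -c | wc -c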

Ya, but.  I wonder whether the "unchanging" regions in the various
images actually result in identical JPEG encoding.  It would only take
the slightest variation in light intensity to render the encoded
regions completely different: JPEG quantizes 8x8 pixel blocks and then
entropy-codes the whole stream with variable-length codes, so even a
tiny change early in an image shifts the bit alignment of everything
after it.  The higher the resolution, the more opportunities for such
variation.
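That suspicion is quick to test, assuming two consecutive captures
named frame001.jpg and frame002.jpg (placeholder names):

  $ # report the first byte at which the two captures differ
  $ cmp frame001.jpg frame002.jpg

  $ # count how many byte positions differ over the common length
  $ cmp -l frame001.jpg frame002.jpg | wc -l

If the files diverge within the first few hundred bytes (i.e. just
past the headers) and most positions differ, the "unchanging" regions
are not producing reusable byte sequences.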

Either way, it _is_ something to carry out an experiment on!
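One possible shape for that experiment, assuming the images are
frame*.jpg in the current directory:

  $ # total size when each file is compressed separately
  $ for f in frame*.jpg; do bzip2 -c "$f"; done | wc -c

  $ # total size when the set is tarred first, then compressed
  $ tar cf - frame*.jpg | bzip2 -c | wc -c

  $ # same again with xz, whose match window is far larger than
  $ # bzip2's 900 kB blocks, so it can find repeats between files
  $ tar cf - frame*.jpg | xz -c | wc -c

One caveat worth knowing: bzip2 compresses in independent blocks of at
most 900 kB, so it cannot exploit repeats between files that land
farther apart than that in the tar.  xz (or zstd with --long) is the
better bet for cross-file redundancy.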

Jonesy
