New box, memory problem

Liam Proven lproven at gmail.com
Sun Oct 23 14:19:26 UTC 2016


On 23 October 2016 at 01:22, rikona <rikona at sonic.net> wrote:
> Even for the WORST tested SSD, you would have to write 204 GB
> of data every day for 10 years


This is the sort of thing that led to sayings like Disraeli's "there
are three kinds of lies: lies, damned lies, and statistics".

Yes, it's true, statistically: the drive as a whole might well not
fail for that long.
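For scale, here is a quick back-of-the-envelope check of that figure (the 204 GB/day and 10-year numbers come from the quoted post; the conversion into a total is mine):

```python
# Total data written if you sustained 204 GB/day for 10 years
# (figures quoted from the earlier post in this thread).
gb_per_day = 204
days = 10 * 365

total_gb = gb_per_day * days
total_tb = total_gb / 1000  # decimal terabytes, as drive vendors count

print(f"{total_gb:,} GB = {total_tb:,.1f} TB written")
```

That works out to roughly 745 TB of total writes, far beyond the endurance rating of a typical consumer SSD, which is exactly the statistical point being made.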

And writes to ordinary data can be spread out across the whole device.

*But*, and it's a big but, you don't need to care about 99.99% of the
disk if you have free space. All you need to care about is, for
instance, the blocks holding the main root directory. If those fail
unexpectedly, the whole drive becomes inaccessible and you lose
everything.

The first few blocks of the disk see more use than all the rest,
because they hold the master inodes: the main indices for the whole
disk.
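To illustrate why those index blocks wear faster, here is a toy simulation. This is my own sketch under a deliberately naive model (fixed metadata blocks, no wear levelling, hypothetical block numbers), not how any real filesystem or SSD controller behaves:

```python
import random
from collections import Counter

# Toy model: every file write touches one random data block,
# plus the fixed metadata block(s) that index the filesystem.
METADATA_BLOCKS = [0, 1]              # hypothetical "master inode" blocks
DATA_BLOCKS = list(range(2, 1000))    # everything else

random.seed(42)
writes = Counter()
for _ in range(10_000):               # simulate 10,000 file writes
    writes[random.choice(DATA_BLOCKS)] += 1  # the file's own data
    for b in METADATA_BLOCKS:                # the index update
        writes[b] += 1

print("metadata block 0:", writes[0], "writes")
print("busiest data block:", max(writes[b] for b in DATA_BLOCKS), "writes")
```

In this model the metadata blocks collect every write (10,000 each), while no single data block sees more than a few dozen. Real SSD firmware remaps hot blocks to even out wear, but the logical hot spot is still there.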

When a hard disk fails, you usually get some unreadable sectors, but
you can copy the rest.

When an SSD fails, it can stop being readable at all. A write to
block 0 fails, the block becomes invalid, and on the next boot the OS
can't find the journal to roll back the change because there is no
visible filesystem any more. Bang: it's all gone.

This is dramatic. It's a worst-case failure. It's very unlikely.

But it can happen, and since 1988 my job has been fixing worst-case
failures. There are a billion-odd personal computers out there now,
plus servers.

So, as Terry Pratchett said,

“Scientists have calculated that the chances of something so patently
absurd actually existing are millions to one.
But magicians have calculated that million-to-one chances crop up nine
times out of ten.”


-- 
Liam Proven • Profile: http://lproven.livejournal.com/profile
Email: lproven at cix.co.uk • GMail/Twitter/Facebook/Flickr: lproven
Skype/MSN: lproven at hotmail.com • LinkedIn/AIM/Yahoo: liamproven
Cell/Mobiles: +44 7939-087884 (UK) • +420 702 829 053 (ČR)




More information about the ubuntu-users mailing list