speeding up hard drive wipe

Joel Rees joel.rees at gmail.com
Sat Sep 26 23:19:00 UTC 2020


On Sun, Sep 27, 2020 at 6:16 AM Grizzly via ubuntu-users
<ubuntu-users at lists.ubuntu.com> wrote:
>
> 26 September 2020  at 21:11, Colin Law wrote:
> Re: speeding up hard drive wipe (at least in part)
>
> >On Sat, 26 Sep 2020 at 21:00, Grizzly via ubuntu-users
> ><ubuntu-users at lists.ubuntu.com> wrote:
> >>
> >> Mutilate File Wiper v2.97 Build 2
> >
> >It appears that their website has been suspended. http://mutilatefilewiper.com
>
> It's been a while since I needed to contact Craig (the author) or worry about
> updates; it just works. I suspect that it's more the way the Windows OS treats
> disks that allows the recovery to find "something". As I did say, I don't have
> recovery tools for Ubuntu. I think one of my other "bootable" wipe tools (only
> for whole disks) is based on a Linux flavour, and IIRC runs at a fair speed
> (750 GB in an hour or so); I've not tested it on TB and larger drives.

It's my understanding that the MS Windows file allocation and data
space reallocation policy, together with the tight coupling Microsoft
maintains through its contracts with the various hardware
manufacturers (secret agreements that would run afoul of fair trade
law), generally means that every save moves the old file's blocks to
the free pool and reallocates the file in space newly acquired from
the free pool, on a least-recently-deallocated basis.

(The defragmenting tool also knows what's going on, and is doing
something other than simple defragmentation while it "optimizes" the
file system layout.)

My experience with the tools mentioned elsewhere in this thread, and
with simply grepping through raw reads of an unmounted partition from
a boot of OpenBSD and such, indicates that my understanding isn't far
from the truth.

And, as mentioned elsewhere, even with Linux, the drive controller
itself is wear-leveling the data.

And *nix OSes have also, from time immemorial, had a tendency to let
files move on reallocation or on open-for-overwrite, etc.

That's why, as someone mentioned, you encrypt partitions that will
contain sensitive data. (And remember that the encryption tools
themselves are likely pwned by the likes of the NSA, which means that
there are trapdoor paths that organized crime can also discover.)

And, when you decommission a drive that has contained sensitive data,
you start by overwriting the physical partition with arbitrary data.
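
For what it's worth, here's a rough Python sketch of that first pass.
/dev/sdX2 is just a placeholder for whatever partition you're
retiring, it needs root, and it pulls from the kernel's urandom pool
(how much you trust that source is the question taken up below):

    #!/usr/bin/env python3
    # Sketch: overwrite one partition with pseudorandom data.
    # DEVICE is a placeholder; point it at the partition being retired.
    import os

    DEVICE = "/dev/sdX2"           # hypothetical device node, needs root
    BLOCK = 4 * 1024 * 1024        # write in 4 MiB chunks

    with open(DEVICE, "r+b", buffering=0) as dev:
        size = dev.seek(0, os.SEEK_END)  # a block device reports its size here
        dev.seek(0)
        remaining = size
        while remaining > 0:
            chunk = os.urandom(min(BLOCK, remaining))
            remaining -= dev.write(chunk)
        os.fsync(dev.fileno())           # force it out of the page cache
    print("overwrote", size, "bytes on", DEVICE)

On anything recent the drive's write speed, not the random source, is
usually the bottleneck, so this still takes hours on a big spinning
disk.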

But systems with a hardened software /dev/random often can't sustain
the data rates from the entropy pool that you want when you use it as
the source of your arbitrary data, BTW. If /dev/random keeps up with
your overwrite, it may not be very cryptographically random. And
commercial hardware sources of random data are also potentially
subject to secret agreements. Which means you may end up writing your
own tools for generating lots of arbitrary data quickly from whatever
sources of entropy you have.
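
One shape such a tool can take, sketched roughly below and making no
claim to be a vetted generator, is to take one slow read of real
entropy from /dev/random and stretch it with a hash run in counter
mode. A serious version would use a reviewed stream cipher, but the
idea is the same:

    #!/usr/bin/env python3
    # Toy sketch: stretch a small /dev/random seed into a fast stream of
    # pseudorandom data by running BLAKE2b in counter mode.
    import hashlib
    import sys

    with open("/dev/random", "rb") as pool:
        seed = pool.read(32)       # the one slow read of real entropy

    def keystream(seed, nblocks, block_size=64 * 1024):
        # Yield pseudorandom blocks derived from the seed.
        counter = 0
        for _ in range(nblocks):
            buf = bytearray()
            while len(buf) < block_size:
                buf += hashlib.blake2b(
                    seed + counter.to_bytes(8, "little")).digest()
                counter += 1
            yield bytes(buf[:block_size])

    # Example: push 1 GiB of arbitrary data to stdout, to be piped onto
    # whatever needs overwriting (via dd, a redirect, etc.).
    for chunk in keystream(seed, 16384):   # 16384 * 64 KiB = 1 GiB
        sys.stdout.buffer.write(chunk)

That keeps the pressure off the entropy pool after the initial seed,
at the cost of trusting the hash construction (and, per the above, the
machine it runs on).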

You really can't trust anything you didn't build yourself.

(And, if you are aware of your own limitations, you can't really trust
what you built, yourself, either.)

So, it's good to be aware of the possibilities, but you ultimately
have to make tradeoffs. It's engineering, so tradeoffs should be
expected.

-- 
Joel Rees

http://reiisi.blogspot.jp/p/novels-i-am-writing.html



