Keylogger

Joel Rees joel.rees at gmail.com
Mon Dec 4 00:28:32 UTC 2017


2017/12/04 8:53 "Colin Watson" <cjwatson at ubuntu.com>:
>
> On Sun, Dec 03, 2017 at 08:21:41AM +0100, Xen wrote:
> > I was installing Debian the other day on some system. The "secure erase"
> > option probably used shred in its default state.
>
> No, that's not so; it uses
> https://anonscm.debian.org/git/d-i/partman-crypto.git/tree/blockdev-wipe/blockdev-wipe.c,
> called from
> https://anonscm.debian.org/git/d-i/partman-crypto.git/tree/lib/crypto-base.sh#n284.
> It's basically just intended to be dd except with a progress indicator
> that can be hooked into the Debian installer's frontend.  There's no
> intent here to do a particularly paranoid shred operation.
>
> Somebody had a go at speeding it up a few years ago
> (https://bugs.debian.org/722898, included in jessie and later releases).
> If it's still significantly slower than dd even after that, then I think
> it's worth somebody's time to do a bit of experimentation to investigate
> why that's the case, as it's probably just a bug somewhere.
>

Consider what you are doing when you run

dd if=/dev/random of=whatever

with appropriate block options, etc. Think about the speed of your bus, the
limits of speed on cables, the amount of data you are pumping, etc.

If that doesn't clarify things, investigate the cost of system entropy.
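A quick way to see the cost of the data source (a sketch assuming GNU coreutils dd and the Linux /proc interface; writing to /dev/null keeps it harmless): time the same transfer from /dev/zero and from /dev/urandom, then look at how little entropy the kernel pool actually holds.

```shell
# Same amount of data, two sources: the time difference is the cost
# of generating the bytes, not of writing them.
time dd if=/dev/zero    of=/dev/null bs=1M count=256 2>/dev/null
time dd if=/dev/urandom of=/dev/null bs=1M count=256 2>/dev/null

# Entropy currently available in the kernel pool, in bits (Linux only):
cat /proc/sys/kernel/random/entropy_avail
```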

Properly wiping a drive is just going to be slow. You can't overwrite a
terabyte with a simple dump of all bits zero in a second. That pass, plus a
second pass of all bits one, ought to be good enough if you aren't involved
in running arms or something like that. But it takes a finite amount of
time, if it is actually being done.
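The zeros-then-ones scheme above can be sketched with dd and tr. The sketch targets a scratch file; for a real wipe you would substitute the block device (e.g. /dev/sdX) with great care. iflag=fullblock is a GNU dd option, needed so partial reads from the pipe don't shorten the second pass.

```shell
TARGET=/tmp/wipe-demo.img   # stand-in for a real device such as /dev/sdX

# Pass 1: all bits zero.
dd if=/dev/zero of="$TARGET" bs=1M count=16 2>/dev/null

# Pass 2: all bits one. tr rewrites the zero stream as 0xFF bytes;
# conv=notrunc overwrites in place, iflag=fullblock keeps the count exact.
tr '\0' '\377' < /dev/zero |
    dd of="$TARGET" bs=1M count=16 iflag=fullblock conv=notrunc 2>/dev/null
```

On a real device you would drop the count= operand and let each pass run to the end of the disk.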

But back around 2012, I think, there were flash devices that used
compression to make 4G of actual storage look like 8G, depending on the
statistical hope that the average purchaser would only be saving a few
hundred megabytes before losing the thing somewhere anyway.

With those, a dump of 8 Gbytes of all bits zero or all bits one would not
even begin to fill the thing. Two passes with the output of the old C
library rand() call would be good enough to work around that, but a
physical analysis of the device would reveal a very large repeating
pattern. Once the pattern is known, it's easier to do the deeper analysis.

... if you have data that is worth that much to someone.

Most of us don't have data that is worth that much. That's what saves us.

But if you need to wipe and reuse a drive, you probably should do it right.
You should understand enough of what you're working against to be able to
use dd and custom data sources to do it yourself. That way you can avoid
having to trust tools like those mentioned, which may or may not
successfully do what you want them to.
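A minimal do-it-yourself version of that, assuming the Linux /dev/urandom interface: the kernel's CSPRNG is fast, never blocks, and has no short repeating period like the old rand(), so a single pass of it defeats both the compression trick and simple pattern analysis. Again sketched against a scratch file rather than a real device:

```shell
TARGET=/tmp/wipe-demo2.img   # stand-in for the real device

# One pass of cryptographic-quality pseudorandom data: incompressible,
# so even a compressing flash controller has to store all of it.
dd if=/dev/urandom of="$TARGET" bs=1M count=16 2>/dev/null
```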

