data shredder

Amedee Van Gasse (ub) amedee-ubuntu at amedee.be
Mon Dec 21 22:16:57 UTC 2009


On Mon, December 21, 2009 17:22, Ray Leventhal wrote:
> Gilles Gravier wrote:
>> Hi!
>>
>> On 21/12/2009 09:55, Amedee Van Gasse (ub) wrote:
>>> On Mon, December 21, 2009 04:28, jesse stephen wrote:
>>>
>>>> I'm looking for a data shredder for ubuntu 9.10
>>>>
>>> The other suggestions are good, and if you want a low-tech solution:
>>>
>>> 1) delete your files with rm as usual
>>> 2) overwrite the empty disk space with zeroes or random data
>>> Use either one of these commands:
>>>
dd if=/dev/zero of=nullfile bs=1M
>>> dd if=/dev/random of=randomfile bs=1M
>>>
>>> They will create a file called 'nullfile' or 'randomfile' that fills all
>>> the empty space on your disk. The dd command will automatically stop when
>>> all free disk space is used.
>>> Please note that this can take a *long* time, depending on how much free
>>> disk space you have. Also, /dev/random is a special device that generates
>>> "entropy" (= random data); with this method you drain the available
>>> entropy pool, so dd will sometimes stall until the kernel has gathered
>>> enough new entropy.
>>>
>>> When it's done, rm nullfile or rm randomfile.
>>> If you're really paranoid, repeat the procedure a couple of times.
>>>
>>>
>>>
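To spell my low-tech approach out as one sketch: this assumes a filesystem
mounted at /home with enough free space, and the file name is just a
placeholder.

cd /home
dd if=/dev/zero of=nullfile bs=1M   # fills free space with zeroes, stops at "No space left on device"
sync                                # make sure the data really reaches the disk
rm nullfile                         # give the space back

If you want random data instead, /dev/urandom can replace /dev/zero; unlike
/dev/random it doesn't stall waiting for new entropy.
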
>> The problem with these commands, is that you're not really helping...
>> Forensics tools will read below one or more levels of re-write. You need
>> to do this several times in a row... and, more importantly, you need to
>> use special data patterns that will actually make reading shadows of
>> former data harder if not impossible. There are standards for that. And
>> they do not involve writing random data or zeros, but actual specific
>> patterns.
>>
>> Gilles.
>>
> Sorry to come in late to this, but no.
>
> And...my apologies for going OT as the OP didn't ask for a diatribe :)
>
> According to NIST (the US National Institute of Standards and
> Technology) in their publication SP 800-88, two types of overwrite
> standards are defined: 'clear' and 'purge'.
>
> 'Clear' calls for the systematic overwriting of every addressable sector
> of a drive and is sufficient for eradication, putting the data beyond the
> recovery capabilities of most labs, even data recovery companies (I work
> for one).
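
For reference, a single-pass 'clear' of a whole drive can be done with
standard tools. A rough sketch, where /dev/sdX is a placeholder for the
target drive (triple-check it, the result is unrecoverable by design):

dd if=/dev/zero of=/dev/sdX bs=1M   # one pass of zeroes over every sector
shred -v -n 1 /dev/sdX              # or: one pass of random data over every sector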
>
> 'Purge' calls for either 1) calling on the drive's firmware to carry out
> the eradication by overwriting (security erase is one example), or 2)
> physically shredding the hard drive into pieces of a defined size (I
> cannot remember the size and don't have the spec in front of me).
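
The firmware route usually means the ATA Security Erase feature. A rough
sketch with hdparm, where /dev/sdX and the password 'p' are placeholders and
the drive must not be in the 'frozen' security state:

hdparm -I /dev/sdX | grep -i erase                      # check Security Erase support and time estimate
hdparm --user-master u --security-set-pass p /dev/sdX   # set a temporary user password
hdparm --user-master u --security-erase p /dev/sdX      # ask the drive firmware to erase itself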
>
> Both 'clear' and 'purge' are single pass overwrite paradigms...and both
> are sufficient to eradicate data.
>
> The former 3-pass overwrite standard (referred to as DoD 5220.22-M) is
> deprecated, but when it was the 'way to go', it called for 3 passes...a
> pattern, its complement, then random data.
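
For the historical record, something close to that can be approximated with
shred, though its passes here use random data rather than the exact
pattern/complement sequence the DoD spec described (/dev/sdX is again a
placeholder):

shred -v -n 3 /dev/sdX   # three overwrite passes over the whole drive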
>
> The concept of digging into 'layers' of data on a spinning magnetic disk
> does not hold for today's drives and technology.  As part of the data
> eradication program we endorse where I work, a single pass of 'purge' or
> 'clear' satisfies all US standards including GLBA, HIPAA and SOX.
>
> As this is wholly OT at this point, I'll be happy to reply privately if
> there are any comments or questions.


I'm glad that someone who works at a data recovery company agrees with me. :)
I'm not ignorant on the subject: it was one of the two topics I considered
for my thesis. (In the end I'm doing the other one, about spam.)




