hard disk integrity check
nine.socks at gmail.com
Tue Apr 18 13:05:22 UTC 2006
On 4/17/06, Alan McKinnon <alan at linuxholdings.co.za> wrote:
> On Monday 17 April 2006 00:22, Yorvik wrote:
> > Gary W. Swearingen wrote:
> > > I'll propagate a rumor I heard: Use of "badblocks" has been
> > > obsolete since drives started to remap their own bad blocks many
> > > years ago. You won't find bad blocks until after your drive has
> > > found so many that it's better used as a doorstop.
> > >
> > > I beg to be corrected by someone with real knowledge.
> > I was told a couple of years back, that if you can find bad blocks
> > with 'normal user software' the drive has had it and may as well be
> > chucked.
> That's not true. There's many urban myths surrounding disks, and this
> is one of them.
> > Personally, I can remember when harddisks had labels on them
> > listing the bad blocks.
> Those were MFM drives. Drive manufacturers soon got fed up with the
> support calls from users as to why they shipped faulty disks - ALL
> drives have errors. Solution - disguise the errors. Modern drives
> keep a small percentage of space free for bad blocks and as the
> firmware picks up failing sectors, it remaps their location into this
> unused space. The whole cylinder/sector scheme is a pure abstraction
> anyway, so this works well.
> When you run badblocks and pick up a few errors, all you are doing is
> beating the drive firmware to the same job. When you find it with
> user software, it means that the firmware has given up trying.
> Reading and writing to a disk is a tricky job (almost but not quite
> entirely unlike reading and writing to RAM), and the firmware is
> limited in what it can get the heads to do. The end result is that it
> gives up easily, which opens up a nice market for third party
> software. This is most of what disk data recovery is about. Spinrite
> is a good example and http://www.grc.com explains it all nicely as
> long as you can read past Steve Gibson's hyped opinions.
> What you should be worried about is the *rate* at which bad sectors
> are being produced. Once this passes a certain well-defined amount,
> then the drive is becoming statistically more likely to fail. This is
> what SMART is all about.
> So while what you were told is not completely wrong, it's over
> simplified by a very big margin.
so does this mean I shouldn't bother testing my hard disks, since it's
already taken care of?
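Not quite - you still want to watch the SMART counters Alan mentioned,
since the *rate* of remapping is the real warning sign. A minimal sketch
of how you might check (assuming smartmontools is installed and your
drive is /dev/sda - substitute your own device):

```shell
# Overall SMART health verdict from the drive's own self-assessment
smartctl -H /dev/sda

# Watch the raw count of remapped sectors; a value that keeps growing
# between checks is the "rate" to worry about
smartctl -A /dev/sda | grep -i -e Reallocated -e Pending

# Kick off the drive's built-in short self-test (runs in the background;
# read the result later with: smartctl -l selftest /dev/sda)
smartctl -t short /dev/sda

# Optional: a non-destructive read-only surface scan from userspace.
# Any errors reported here mean the firmware has already run out of
# spare sectors, as described above
badblocks -sv /dev/sda
```

These commands need root, and the exact attribute names vary a little
between drive vendors, but Reallocated_Sector_Ct is the usual one to
track.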
More information about the ubuntu-users mailing list