regular fsck runs are too disruptive - and the current approach does not work very well at detecting defects!

Vincenzo Ciancia ciancia at di.unipi.it
Mon Oct 1 10:03:18 UTC 2007


On 01/10/2007 Waldemar Kornewald wrote:
> Did you ever use WinXP and run chkdsk from the command line? It warns
> you that it can't *correct* errors (a reboot is needed if errors are
> found), but it can at least *detect* errors on a mounted and active
> partition (even the boot partition, in case you wondered). Why should
> Linux not be able to copy this behavior?

I am still convinced that fsck is _not_ the right tool for this purpose.
Ext3 already has a journal that should (hopefully) prevent file system
corruption due to power failures. What is the point of running fsck
periodically? If it is to check for disk errors, then badblocks is the
right tool, and it can run read-only on a mounted filesystem. Moreover,
if the point is to check periodically, then we could check a small
number of blocks at a time, using low disk priority as search daemons
(should) do, or even check random blocks.
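To make the random-block idea concrete, here is a minimal sketch (the function name, block size, and sample count are my own illustrative choices, not part of badblocks or any existing tool). It opens the target read-only, so in principle it is safe on a mounted device, and reports block offsets that fail to read:

```python
import os
import random

def sample_check(path, block_size=4096, samples=64):
    """Read a random sample of blocks from `path` (opened read-only)
    and return the byte offsets of blocks that failed to read.
    Hypothetical sketch; not an existing badblocks feature."""
    bad = []
    fd = os.open(path, os.O_RDONLY)
    try:
        total = os.lseek(fd, 0, os.SEEK_END) // block_size
        for blk in random.sample(range(total), min(samples, total)):
            os.lseek(fd, blk * block_size, os.SEEK_SET)
            try:
                os.read(fd, block_size)
            except OSError:  # a failed read suggests a bad sector
                bad.append(blk * block_size)
    finally:
        os.close(fd)
    return bad
```

Run periodically (say, from cron) against the block device, a scheme like this would cover the whole disk probabilistically over time without a long offline pass.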

Finally, I want to point out to those who say fsck defends your data: I
have a desktop machine which hosts an internal service, so it's
continuously up. I once rebooted, the disk was damaged, and I could no
longer boot or recover data (I had a backup, in any case, but that's not
so typical for desktop users). Yet the machine had had an uptime of
months. If I had had an online check (e.g. a read-only fsck, SMART, or
badblocks) I would have discovered the problem earlier, and would have
been able to recover some data. I know this from long experience, so
don't tell me it's not likely.

In my opinion, a blueprint should be written about checking _blocks_ of
disks while the OS is running, in such a way that user work is not
affected at all, by modifying the badblocks command.
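Such a background checker could also scan sequentially in small slices, keeping a cursor between runs so that a full pass completes over many short, low-impact invocations. A sketch of that idea, with hypothetical names and parameters of my own choosing:

```python
import os

def incremental_check(path, state_file, block_size=4096, blocks_per_run=256):
    """Check the next `blocks_per_run` blocks of `path` (opened
    read-only), resuming from the offset saved in `state_file`.
    Returns byte offsets that failed to read. Hypothetical sketch."""
    try:
        with open(state_file) as f:
            cursor = int(f.read())
    except (FileNotFoundError, ValueError):
        cursor = 0  # no saved state yet: start from the beginning
    bad = []
    fd = os.open(path, os.O_RDONLY)
    try:
        total = os.lseek(fd, 0, os.SEEK_END) // block_size
        if cursor >= total:
            cursor = 0  # previous pass finished: wrap around
        end = min(cursor + blocks_per_run, total)
        for blk in range(cursor, end):
            os.lseek(fd, blk * block_size, os.SEEK_SET)
            try:
                os.read(fd, block_size)
            except OSError:
                bad.append(blk * block_size)
        cursor = end
    finally:
        os.close(fd)
    with open(state_file, "w") as f:
        f.write(str(cursor))  # persist progress for the next run
    return bad
```

Throttling (low I/O priority, pauses between slices) is what would keep user work unaffected; on Linux that could be done by launching the checker under ionice, which this sketch leaves out.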

Vincenzo

More information about the Ubuntu-devel-discuss mailing list