Maverick on Weybridge + WD2.5THDD

Colin Ian King colin.king at canonical.com
Tue Oct 12 12:04:28 UTC 2010


Manoj,

I've put my hacky 4K sector test program up in my debug-code repo:

http://kernel.ubuntu.com/git?p=cking/debug-code/.git;a=blob;f=io-4k-sector-test/iotest.c;h=1df01df00f60211bd07527e724e2c901c3fe32a6;hb=4aff5c0e7aedfa4a5b33bb76e863a1c3a7c5312c

I've already run basic tests using dd and sum over the entire HDD 3 times
and found zero errors, but the above test program is more exhaustive as
it writes a magic value to each sector and verifies it later.  I've
tweaked the above code to read the last 2TB and it works OK, so
addressing the last 0.5TB of sectors is working fine. Now I'm soak
testing to do the whole HDD a few times over the next 48 hours.

Colin

On Fri, 2010-10-08 at 10:29 -0500, manoj.iyer at canonical.com wrote:
> I use the tool called stress http://goo.gl/uV23. My idea is that if I can 
> create enough threads and make them write/delete 1GB or more files 
> repeatedly, it is possible that data will get written along the entire 
> length of the disk. I calculated that approximately 32768 threads (the 
> thread max on my machine is 31184), each reading/writing 1GB+ of data, 
> might do the trick; the whole test could take a few hours. I could modify 
> the program to report where on the disk the data is being written and 
> verify that the entire LBA range is exercised, but I have not got around 
> to doing that. I ran the tool as is last night, and it completed without 
> any failures.
> 
> Cheers
> --- manjo
> 
> On Fri, 8 Oct 2010, Bob Griswold wrote:
> 
> > Albert / Jeff:
> >
> > Given that Colin, Manjo, Harry and potentially others on this list are
> > interested in exercising the entire reported LBA range under Ubuntu, can you
> > provide thoughts (when you get the chance) on the tools we use in Windows
> > for that work that may help them reduce the time needed?  I may be naïve
> > from my past experiences in doing full volume passes with RAID targets, but
> > Colin's predictions on the amount of time needed to pass a 3x write/read
> > test on a 2.5 TB HDD seem way too long on the SATA bus.  iSCSI or CIFS,
> > maybe, but 3.0 Gb SATA should smoke that.  Am I missing something?
> >
> > Clearly, as your time permits; I know you're busy.
> >
> > Bob
> >
> > -----Original Message-----
> > From: Colin Ian King [mailto:colin.king at canonical.com]
> > Sent: Friday, October 08, 2010 1:42 AM
> > To: Bob Griswold
> > Cc: manoj.iyer at canonical.com; Hsiung, Harry L; Ubuntu Kernel Team
> > Subject: RE: Maverick on Weybridge + WD2.5THDD
> >
> > I'll put the HDD into a soak test at the raw device level next week to
> > exercise every sector. I reckon it will take ~30 hours to fill the device,
> > so I will run this for a week to get 2-3 write/read iterations done.
> >
> > Colin
> >
> > On Thu, 2010-10-07 at 16:07 -0700, Bob Griswold wrote:
> >> Hey, they don't fry.  :(
> >>
> >> Remember, SATA-class HDD running massive concurrent IO nonstop is not
> >> the environment or market it's sold into.  For that, you'd need a
> >> moderately more expensive Enterprise-class SATA HDD.  Please follow me
> >> to the display case, in the back...
> >>
> >> Bob
> >>
> >> -----Original Message-----
> >> From: manoj.iyer at canonical.com [mailto:manoj.iyer at canonical.com]
> >> Sent: Thursday, October 07, 2010 3:51 PM
> >> To: Hsiung, Harry L
> >> Cc: Manoj Iyer; Bob Griswold; Ubuntu Kernel Team
> >> Subject: RE: Maverick on Weybridge + WD2.5THDD
> >>
> >>
> >> Harry,
> >>
> >> I started a stress test this afternoon and it is still going. The test
> >> spawns 32768 threads, which is a little over half the max threads, and
> >> each thread writes/removes 1GB files (in a loop). Hopefully some of these
> >> threads will write close to the end of the disk as well. The HDD is
> >> warm to the touch; hopefully I won't fry the WD drive ;)
> >>
> >>
> >> Cheers
> >> --- manjo
> >>
> >> On Thu, 7 Oct 2010, Hsiung, Harry L wrote:
> >>
> >>> I had done an installation of the Maverick Meerkat daily build amd64.iso
> >>> from 9/10/10 for IDF (Sept 14th). I did not have any problems and could
> >>> see all of the disk. The official Meerkat build (before Sept 11th)
> >>> appears to have the UEFI install missing.
> >>>
> >>> If you have any disk utilities to test the filesystem (all 2.5tb or
> >>> 3tb), I would like to know if the file system is really functional all
> >>> the way out to the end of the disk.
> >>>
> >>> I could do the brute force thing of copying files until I fill the disk
> >>> up, but it is really time consuming. Checking to see if the filesystem
> >>> is corrupt is still a question in my mind (does fsck check this for
> >>> disks and file systems >2.2 tb?).
> >>>
> >>> Harry Hsiung (熊海霖)
> >>> Intel Corp.
> >>> SSG PSI Tiano/EFI TME
> >>> Dupont WA DP2-420
> >>> office 253-371-5381
> >>> cell 360-870-2141
> >>>
> >>>
> >>> -----Original Message-----
> >>> From: Manoj Iyer [mailto:manoj.iyer at canonical.com]
> >>> Sent: Thursday, October 07, 2010 11:31 AM
> >>> To: Bob Griswold; Hsiung, Harry L
> >>> Cc: Ubuntu Kernel Team
> >>> Subject: Maverick on Weybridge + WD2.5THDD
> >>>
> >>>
> >>> Harry/Bob,
> >>>
> >>> I was able to install Maverick on the Weybridge with the 2.5T HDD
> >>> shipped to me from WD by Bob. Installed from CD in UEFI mode.
> >>>
> >>>
> >>> Disk /dev/sda1: 18 MB, 18874368 bytes
> >>> 255 heads, 63 sectors/track, 2 cylinders
> >>> Units = cylinders of 16065 * 512 = 8225280 bytes
> >>> Sector size (logical/physical): 512 bytes / 4096 bytes
> >>> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> >>> Disk identifier: 0x00000000
> >>>
> >>>      Device Boot      Start         End      Blocks   Id  System
> >>>
> >>>
> >>> Disk /dev/sda2: 2494.4 GB, 2494448009216 bytes
> >>> 255 heads, 63 sectors/track, 303266 cylinders
> >>> Units = cylinders of 16065 * 512 = 8225280 bytes
> >>> Sector size (logical/physical): 512 bytes / 4096 bytes
> >>> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> >>> Disk identifier: 0x00000000
> >>>
> >>> Looks OK to me at first glance; if you guys notice anything abnormal,
> >>> please share it with me.
> >>>
> >>>
> >>> Cheers
> >>> --- manjo
> >>>
> >>
> >
> >
> >






More information about the kernel-team mailing list