[ubuntu-users] Changing 232.9 NTFS hd to EXT3

Ted Hilts thilts at mcsnet.ca
Sun Mar 22 02:50:46 UTC 2009


Ray Parrish wrote:
> Ted Hilts wrote:
>> Ray Parrish wrote:
>>> Ted Hilts wrote:
>>>> This is a resend as the original email has not shown up on the list.
>>>>
>>>> I want to know the optimal solution.
>>>> The hard drive (HD) is 232.9 GB.
>>>> The application using the HD is the storage of web pages.
>>>> The HD is currently mounted as NTFS and there is no data on it that I
>>>> want to keep.
>>>> Ubuntu is installed in a dual-boot GRUB configuration with XP Home.
>>>> While Ubuntu is booted I want to format this drive.
>>>> Eventually all but one of the 6 current NTFS-formatted hard drives will
>>>> be changed to EXT3.
>>>>
>>>> The following is what I think is the correct use of options, to be
>>>> applied after the 232.9 GB HD has been unmounted by Ubuntu with the
>>>> command "umount /media/sdc1":
>>>>
>>>> sudo /sbin/mkfs.ext3 -c -i 1024 -b 1024 -L HDA1 -v /dev/hda1
>>>>
>>>> and then mount the HD.  Also, is there anything I have missed?
>>>>
>>>> I think the smallest block size is now 1024, but it used to be 512.
>>>>
>>>> Below is the man page synopsis:
>>>>
>>>> SYNOPSIS
>>>>        mke2fs [ -c | -l filename ] [ -b block-size ] [ -f fragment-size ]
>>>>        [ -g blocks-per-group ] [ -i bytes-per-inode ] [ -I inode-size ]
>>>>        [ -j ] [ -J journal-options ] [ -N number-of-inodes ] [ -n ]
>>>>        [ -m reserved-blocks-percentage ] [ -o creator-os ]
>>>>        [ -O feature[,...] ] [ -q ] [ -r fs-revision-level ]
>>>>        [ -E extended-options ] [ -v ] [ -F ] [ -L volume-label ]
>>>>        [ -M last-mounted-directory ] [ -S ] [ -T filesystem-type ]
>>>>        [ -V ] device [ blocks-count ]
>>>>
>>>>        mke2fs -O journal_dev [ -b block-size ] [ -L volume-label ]
>>>>        [ -n ] [ -q ] [ -v ] external-journal [ blocks-count ]
>>>>
>>>> DESCRIPTION
>>>>        mke2fs is used to create an ext2/ext3 filesystem (usually in a
>>>>        disk partition).  device is the special file corresponding to
>>>>        the device (e.g. /dev/hdXX).  blocks-count is the number of
>>>>        blocks on the device.  If omitted, mke2fs automagically figures
>>>>        the file system size.  If called as mkfs.ext3 a journal is
>>>>        created as if the -j option was specified.
>>>>
>>>>
>>>> Thanks for any input -- Ted
>>> Hello,
>>>
>>> The only thing I'm seeing so far is that you are using hda1 when the
>>> disk was mounted at /media/sdc1, and those don't point at the same kind
>>> of disk. The hda1 name is for IDE disks, and would have pointed at the
>>> first partition on the first disk of the IDE interface, while your sdc1
>>> points to a SCSI drive at position 3, partition 1, I believe. Use
>>> "df -h" to be certain of the drive designations you should use.
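>>> For example, either of these should show which device names are
>>> actually in use on your machine (the output will of course differ from
>>> machine to machine):
>>>
>>>     df -h          # shows each mounted filesystem and its mount point
>>>     sudo fdisk -l  # lists every disk and partition the kernel sees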
>>>
>>> Your -i 1024 sets the bytes-per-inode ratio far too low; it creates an
>>> inode for every 1024 bytes of disk, as is explained in the following
>>> copy from -
>>>
>>> <file:///usr/share/doc/HOWTO/en-html/Large-Disk-HOWTO-14.html>
>>>
>>> [begin quote] "fdisk will tell you how many blocks there are on the 
>>> disk. If you make a file system on the disk, say with mke2fs, then this 
>>> filesystem needs some space for bookkeeping - typically something like 
>>> 4% of the file system size, more if you ask for a lot of inodes during 
>>> mke2fs. For example:
>>>
>>>     # sfdisk -s /dev/hda9
>>>     4095976
>>>     # mke2fs -i 1024 /dev/hda9
>>>     mke2fs 1.12, 9-Jul-98 for EXT2 FS 0.5b, 95/08/09
>>>     ...
>>>     204798 blocks (5.00%) reserved for the super user
>>>     ...
>>>     # mount /dev/hda9 /somewhere
>>>     # df /somewhere
>>>     Filesystem         1024-blocks  Used Available Capacity Mounted on
>>>     /dev/hda9            3574475      13  3369664      0%   /mnt
>>>     # df -i /somewhere
>>>     Filesystem           Inodes   IUsed   IFree  %IUsed Mounted on
>>>     /dev/hda9            4096000      11 4095989     0%  /mnt
>>>     #
>>>
>>> We have a partition with 4095976 blocks, make an ext2 filesystem on it, 
>>> mount it somewhere and find that it only has 3574475 blocks - 521501 
>>> blocks (12%) was lost to inodes and other bookkeeping. Note that the 
>>> difference between the total 3574475 and the 3369664 available to the 
>>> user are the 13 blocks in use plus the 204798 blocks reserved for root. 
>>> This latter number can be changed by tune2fs. This `-i 1024' is only 
>>> reasonable for news spools and the like, with lots and lots of small 
>>> files. The default would be:
>>>
>>>     # mke2fs /dev/hda9
>>>     # mount /dev/hda9 /somewhere
>>>     # df /somewhere
>>>     Filesystem         1024-blocks  Used Available Capacity Mounted on
>>>     /dev/hda9            3958475      13  3753664      0%   /mnt
>>>     # df -i /somewhere
>>>     Filesystem           Inodes   IUsed   IFree  %IUsed Mounted on
>>>     /dev/hda9            1024000      11 1023989     0%  /mnt
>>>     #
>>>
>>> Now only 137501 blocks (3.3%) are used for inodes, so that we have 384 
>>> MB more than before. (Apparently, each inode takes 128 bytes.) On the 
>>> other hand, this filesystem can have at most 1024000 files (more than 
>>> enough), against 4096000 (too much) earlier." [end quote]
>>>
>>> NOTE: The man page for mkfs.ext2 states that the default inode size is
>>> now 256 bytes, so the 128 bytes quoted in the above article excerpt is
>>> out of date. Here's a quote from the man page for mkfs.ext2 -
>>>
>>> [begin quote]
>>>  -I inode-size
>>>        Specify the size of each inode in bytes. mke2fs creates 256-byte
>>>        inodes by default. In kernels after 2.6.10 and some earlier
>>>        vendor kernels it is possible to utilize inodes larger than 128
>>>        bytes to store extended attributes for improved performance. The
>>>        inode-size value must be a power of two larger or equal to 128.
>>>        The larger the inode-size, the more space the inode table will
>>>        consume, and this reduces the usable space in the filesystem and
>>>        can also negatively impact performance. Extended attributes
>>>        stored in large inodes are not visible with older kernels, and
>>>        such filesystems will not be mountable with 2.4 kernels at all.
>>> [end quote]
>>>
>>> So, specifying the inode size appears to be a trade-off between usable
>>> file space, the number of possible files, and performance, with the
>>> added catch that filesystems using larger inodes cannot be mounted by
>>> 2.4 kernels at all. That may not be a problem for you, as the kernels
>>> for Hardy are in the 2.6 range, so they are not affected by this
>>> consideration. If you want the smaller 128-byte size you will need to
>>> specify it with -I (capital i); otherwise it seems appropriate to use
>>> the default of 256.
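>>> For instance, something along these lines should do it (note the
>>> capital -I; the device name is only a placeholder for your real one):
>>>
>>>     sudo mkfs.ext3 -I 128 /dev/sdXN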
>>>
>>> I also have not seen any reference anywhere to a need to unmount the
>>> drive before formatting it.
>>>
>>> That's what I could find out.
>>>
>>> Later, Ray Parrish
>>>
>> I think you are correct on the drive type.  The drives I am thinking of
>> formatting are the 6 original NTFS hard drives on the XP Home machine.
>> When I added the dual-boot arrangement to include Ubuntu, I had an empty
>> 80 GB partition on one of the drives, and so Ubuntu "/" was placed on
>> that 80 GB partition when it was installed.
>>
>> Back to the drive type of these 6 hard drives.  It's too bad I did not
>> notice that the drives are SCSI and not IDE types. Old age is setting
>> in, and it seems I have memory issues to deal with because of that
>> condition. (1) Anyway, I agree that I should be using sda1 instead of
>> hda1.  (2) Also, though I'm not sure about this, I don't think there is
>> a problem in me designating sda1 for the first hard drive on which I do
>> the conversion to ext3, as I don't think the physical location and
>> physical order of the hard drives matter; the label of the hard drive
>> is probably what the kernel looks for.  (3) Also, in my case, "/" is on
>> its own partition, and the hard drive could be labeled sda1 regardless
>> of its previous Windows XP designation.  (4) Also, it has occurred to
>> me that I will have to change /etc/fstab to get rid of the Windows
>> designation of the hard drive I am converting, so that the Linux
>> designation of that drive replaces it. (5) Also, if I use the
>> designation sda1 it would not be set as ACTIVE, as it would be if "/"
>> (system root) was placed on it.  Or am I in error on one or other of
>> these points?
>>
>> In terms of the formatting command line, you seem to be saying that for
>> many small files the following might be in order:
>>
>> sudo /sbin/mkfs.ext3 -c -b 1024 -L SDA1 -v /dev/sda1
>>
>> In other words, for small files like web pages, go with the defaults
>> but use block size 1024, which I think is the smallest?
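>> I suppose I can check which values actually got used after the format
>> with tune2fs, something like:
>>
>> sudo tune2fs -l /dev/sda1 | grep -i 'block size'
>> sudo tune2fs -l /dev/sda1 | grep -i 'inode size'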
>>
>> Hope to hear more input from you and others -- Thanks, Ted
> Sorry it has taken me so long to get back to you, but we've all got to 
> sleep sometime, and it was my turn. 8-)
>
> I'm pretty sure I was wrong about your drives being SCSI; I should have
> said SATA, which is what they most likely are. SCSI is an old, clunky
> interface that isn't used much anymore.
>
> That being said, I must also tell you that I'm not really going to be
> able to help you in any kind of timely manner, as I'm a new Linux user
> myself, and all of the answering I've done so far was based on about an
> hour's worth of reading in some man pages and some HTML docs in the
> /usr/share/doc/ folders.
>
> To be able to help you any further is going to require me to do much
> more reading up on the subject at hand first. Face it, this is the blind
> leading the blind. 8-) I did do a grep for "sda" in the /usr/share/doc/
> folders, redirected the output to a file, and found the udev manual,
> which so far seems to bear out your claim that you will be able to name
> each of your drives whatever you want, regardless of their actual
> physical connections as seen by the kernel.
>
> However, it involves the use of symlinks to do so, not the -L parameter
> being applied to the drive to label it. Do some reading in the following
> file -
>
> <file:///usr/share/doc/udev/writing_udev_rules/index.html>
>
> You may have to install the udev documentation package first if it's
> not already there on your machine. I've been going nuts with the
> documentation installs lately, grabbing every man page, info page, and
> HTML doc I can for the Linux system, so I can learn what I'm doing with
> this new operating system.
>
> Your format command line looks good to me so far, but then I've never 
> had the opportunity to format any drives in Linux yet, so how would I 
> know? Also, I have no idea how to make a partition active.
>
> I'm pretty sure Karl was right when he told you that using the gparted 
> partition editor from a Live CD would allow you to do all of what you 
> want, and all from the same interface as well.
>
> Later, Ray Parrish
>
Maybe I'm wrong on what I am about to say, but I don't think so. Yes,
Karl is right that I can use a live CD while Ubuntu is non-functional
and use tools from the CD to operate on the hard drives. A lot of people
use Knoppix, which is designed for that purpose, especially if they have
some kind of rescue situation affecting the operation of their Ubuntu
OS.  But that is not the case with me, because my Ubuntu installation is
currently running just fine.  If I want to use gparted I can launch it
right from the Ubuntu command line.
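For example (the device name here is just an example), I believe
something like this would install it if needed and open it on a
particular disk:

sudo apt-get install gparted
sudo gparted /dev/sdc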

Also, in the deep past, whenever I added a hard drive to a machine I
physically installed the hard drive as master or slave and then
formatted the drive to create the file system.  We are talking here
about a whole hard drive, not partitions on a hard drive. If I wanted
partitions I would have to use something like gparted (there are others
that do about the same thing) to define the boundaries of each
partition, and then make a file system on each partition (like ext2 or
ext3, or others if we are talking Windows). One cannot be using the hard
drive while also trying to create file system(s) on it. I am not
certain, but I think the hard drive should not be mounted while
partitions and file systems are being built on it.

Windows and Linux both mount the hard drive as part of the boot process,
where the system comes together in stages, and for both the BIOS needs
to register the drives -- at least for Windows this is the case. For a
brand new hard drive not previously installed and mounted, there is no
problem using the existing system to set up that hard drive. Under
Windows, the system has to be shut down and then restarted (rebooted)
for a new hard drive to be mounted. Linux needs special information in
the /etc/fstab file, which is used when the system is booted from the
MBR (master boot record) and comes up in several stages. I don't know
the details of how Windows brings up the system.

Once a Linux system is running, it is possible to use the umount command
to unmount a hard drive, or the mount command to remount one previously
unmounted. This can be done right from the command line. So someone
using Linux from the command line can unmount a hard drive in order to
do some kind of maintenance on that drive, and once that is done,
remount the hard drive using the mount command.
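For example, the round trip might look like this (using the drive I
mentioned earlier; the names will differ for other drives):

sudo umount /media/sdc1
(do the maintenance here)
sudo mount /dev/sdc1 /media/sdc1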

In Linux, mounting involves attaching a hard drive device to a mount
point in the filesystem.  For example, "/dev/hda1" to "/tmp", where
"/tmp" has to have been previously created with the command "mkdir
/tmp". These may be related to the symbolic links you talked about.
Anyway, these mappings need to be defined in "/etc/fstab" if one wants
them to be permanent.  Otherwise, they disappear after a shutdown.
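For example, a permanent entry in "/etc/fstab" for that mapping might
look something like this line (the options are the usual defaults, as
far as I know):

/dev/hda1  /tmp  ext3  defaults  0  2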

All the above just to show that Ubuntu as a system can keep operating
while one of its hard drives has been unmounted so certain changes can
be made, and that after the changes have been made the hard drive can be
mounted again.

Lastly, you were right the first time in saying the hard drives we have
been talking about were SCSI. They definitely are not SATA drives.  When
they were installed, it would appear the shop simply used SCSI drives
because there were too many hard drives for IDE (usually IDE supports at
most 4 drives, though an extension IDE channel card can raise this to
8), and I had asked for the kind of operation that at that time could
only be had with SCSI.  But I did not know the shop had done this SCSI
installation until you pointed out that these hard drives were SCSI.
Dumb, eh! I had just assumed they had installed the IDE channel
extension card, but not so! I should have looked. Dumb again, eh!

So I have decided in the next few days to update "/etc/fstab" to change
the mount information, then unmount one of the hard drives (the one I
identified earlier), and then just format over the existing filesystem,
replacing it with an ext3 file system per our previous conversations.
There will be no partitions on this hard drive. Then I will remount the
hard drive, and hopefully everything will work.
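Pulling that together, I expect the session to look something like this
(the WEB1 label is just an example, and I am assuming the existing
single partition gets reused):

sudo umount /media/sdc1
sudo mkfs.ext3 -c -L WEB1 -v /dev/sdc1
sudo mount /dev/sdc1 /media/sdc1

The -c flag checks the drive for bad blocks before the file system is
created.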

Thanks -- Ted

Hope some of this information will help you. You have been very helpful,
partly as a sounding board and partly because you challenged my memory,
so I started to remember things I had forgotten.
Thanks a lot.
Have a nice day.




