Slower performance with ext4

Christopher Chan christopher.chan at bradbury.edu.hk
Mon Nov 2 02:36:17 UTC 2009


Rashkae wrote:
> Christopher Chan wrote:
>   
>> Rashkae wrote:
>>     
>>> Chan Chung Hang Christopher wrote:
>>>> fyrbrds at netscape.net wrote:
>>>>>  
>>>>>   >Data loss anyone?<
>>>>> What evidence do you have that there would be data loss? ext2 and ext3 were used almost immediately after their release as well. The distro maintainers usually do some basic reliability tests or at least have access to such tests. So I would be happy to read any tests you've seen that suggest ext4 is unreliable. To start scaring people with talk of data loss based on random speculation would not be good. 
>>>>>
>>>> Dude, I used to work with clusters of MTA boxes. The last thing I needed
>>>> then was a filesystem that loses data or corrupts its metadata easily. I
>>>> wait before using any newfangled filesystem, regardless of how uber-fast
>>>> it is, or I play the pull-the-plug game with it using whatever
>>>> journaling modes it has available.
>>>>
>>>>
>>>> The ext4 data loss reports started with Ubuntu Jaunty too, I think?
>>>>
>>> The early data loss in Jaunty was really applications clobbering their
>>> own files, combined with EXT4's delayed allocation.  Basically, EXT4
>>> was behaving, for all intents and purposes, like XFS, without the null
>>> bytes.  (I still question the sanity of whoever thought this would be
>>> a good idea... after all, wouldn't we all have been using XFS years
>>> ago if this behaviour were so superior?)  Patches were then backported
>>> to change that behaviour, which introduced a kernel soft-lockup bug in
>>> the Ubuntu kernel (one that was never confirmed in the mainline
>>> kernel).  And now we have unconfirmed sightings of data corruption,
>>> but the one person who claims to reproduce it looks like he has memory
>>> corruption issues.  (He gets a different md5sum every time he checks
>>> the same file... not really a filesystem issue there.)
>>>
>>> None of this is really applicable to your point.  For a mission-critical
>>> production system, you want to use what's known and proven.  (I do find
>>> the choice of JFS odd, however.  I like EXT3 for reliability and
>>> predictability, and XFS for performance, so long as I know my particular
>>> workload won't be affected by XFS's null bytes on unclean shutdown.)
>> XFS's blooming aggressive caching and lack of full journaling are a
>> disaster waiting to happen for MTA queues. If you are running CentOS,
>> you only get ext3...
>>
>
> XFS has as much journaling as any of the other candidates: a journal for
> metadata.  And all MTAs, reportedly, write files in a sane manner and
> never assume a file is written to disk until the fsync completes, and
> are therefore not at all affected by XFS's aggressive caching.  A mail
> server is therefore one of the workloads XFS is best suited for.
>

Journaling only for metadata is not 'as much journaling as any of the
other candidates.' Metadata-only journaling is not equivalent to the
data-plus-metadata journaling that is possible with ext3. XFS's journal
only provides filesystem metadata consistency, which is why you get
files full of NULLs after a crash or power outage. MTAs rely on fsync
calls, and how a filesystem behaves with regard to fsync requests is
the real determiner of whether there is a data guarantee or not. XFS
does not provide a data guarantee; at best, it provides a metadata
guarantee. XFS should not be used for MTA queues unless it is used in
conjunction with a hardware RAID controller that has a battery-backed
(BBU) write cache. XFS is best suited for streaming applications where
data loss can be tolerated.
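To make the fsync point concrete, here is a rough sketch (my own
illustration, not lifted from any actual MTA's source) of the
write/fsync/rename sequence a queue writer has to go through before it
can safely acknowledge a message. The maildir-style tmp/ and new/
directories and the queue_message() helper are invented names for the
example:

/* Rough sketch only: a maildir-style queue write in C.  Error handling
 * is minimal and the file naming is simplified for illustration. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int queue_message(const char *queue, const char *body)
{
    char tmp[4096], final[4096], newdir[4096];

    snprintf(tmp, sizeof tmp, "%s/tmp/%ld.msg", queue, (long)getpid());
    snprintf(final, sizeof final, "%s/new/%ld.msg", queue, (long)getpid());
    snprintf(newdir, sizeof newdir, "%s/new", queue);

    int fd = open(tmp, O_WRONLY | O_CREAT | O_EXCL, 0600);
    if (fd < 0)
        return -1;

    /* Write the message and force it to stable storage before doing
     * anything else.  This is the step the data guarantee hangs on. */
    if (write(fd, body, strlen(body)) != (ssize_t)strlen(body) ||
        fsync(fd) != 0) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    close(fd);

    /* rename() is atomic: readers see either nothing or the complete
     * message, never a half-written file. */
    if (rename(tmp, final) != 0) {
        unlink(tmp);
        return -1;
    }

    /* fsync the containing directory so the rename itself survives a
     * crash or power loss. */
    int dfd = open(newdir, O_RDONLY | O_DIRECTORY);
    if (dfd < 0)
        return -1;
    if (fsync(dfd) != 0) {
        close(dfd);
        return -1;
    }
    close(dfd);

    /* Only now is it safe to tell the sender the message was accepted. */
    return 0;
}

The guarantee hinges entirely on those fsync() calls returning only once
the data is on stable storage. If the filesystem (or a write cache
without a BBU) fudges that, no amount of metadata journaling will get
the message back after the power comes back on.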

>   
>> JFS seems to have the second-best performance overall according to Bruce
>> Guenter's maildir-based simulated local mail delivery benchmark, and it
>> is stable too.
>>     
>
> JFS performs great in benchmarks, but back when I used to use it, I was
> consistently able to bend it out of shape under real-world conditions.
> No data loss, mind you, but damaged metadata (fixed with jfs repair,
> but that should never be needed in a modern file system) and bizarre
> corner cases that caused performance to sink through the floor.  (In
> one instance, I was able to reproduce an issue where reading a file
> while writing new files to disk would perform poorly depending on
> whether the filename had one dot or two.  I.e., if the filename was
> something.tar.gz, or renamed to something.tgz.)  At one time in the
> distant past, someone completely broke quota support in JFS, and no one
> even noticed for four kernel releases.  JFS just doesn't seem to have
> enough people using it to maintain a well-tested status.
>


I would put that down to nobody being bothered to report bugs, as well
as the lack of users.



