Ubuntu ext4 file system hangs/slow at random times
ubuntu at linuxpowered.net
Tue Apr 17 00:33:43 UTC 2012
Hello folks -
I have an unusual problem on some Ubuntu VMs that seems to happen at
random times. I've seen it on maybe 3-4 different VMs at this point, on about 8
occasions in the past couple of months. The only way out of the issue is to
power cycle the VM.
One of the symptoms is very high system CPU usage with low I/O activity.
Software: Ubuntu 10.04.3 64-bit (2.6.32-37-server)
File system: ext4 w/LVM
Hypervisor: ESX 4.1 Update 2
Storage: the volume can be either a VMFS volume or a raw device mapped
LUN from our fibre channel SAN (3PAR F200); all storage is thin
provisioned.
Drivers: Using the paravirtualized SCSI adapters in ESX
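
For reference, this is roughly how I confirm the stack on each VM (the
volume group and device names below are just examples, not our real ones):

    # Is the VMware paravirtual SCSI driver actually loaded?
    lsmod | grep vmw_pvscsi
    # Which physical volumes back each logical volume:
    lvs -o +devices
    # Mount options actually in effect on the ext4 volumes:
    grep ext4 /proc/mounts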
The behavior is that at a random time the ext4 file system seems to get
stuck. Any process accessing the file system gets really slow access, and gets
stuck in a 'D' state. Underlying I/O performance is good, with both service
times and average wait times under 1 millisecond. In one situation at least I
tried to do a tail on a sub-100-byte file during this sort of behavior and it
hung. The kernel dumps out tons of messages saying things were waiting longer
than 120 seconds to run.
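
When it happens, this is the sort of thing I run to see what's blocked
(PID 1234 below is a placeholder for whichever process is stuck; run as root):

    # List tasks in uninterruptible sleep ('D') and what they're waiting on:
    ps -eo state,pid,wchan:32,cmd | awk '$1 == "D"'
    # Kernel stack of one stuck process:
    cat /proc/1234/stack
    # Dump all blocked tasks into the kernel log:
    echo w > /proc/sysrq-trigger
    dmesg | tail -n 100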
I have spent at least a couple of hours searching on this topic but have
not found much information. I enabled ext4 event tracing via
/sys/kernel/debug/tracing/set_event and those events are here:
I also put a snapshot of iostat running there too.
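
For anyone who wants to reproduce the capture, this is roughly how I
enabled the tracing and took the iostat snapshot (output paths are just
examples):

    # Make sure debugfs is mounted (it usually already is on Ubuntu):
    mount -t debugfs none /sys/kernel/debug 2>/dev/null
    # Enable all ext4 trace events:
    echo 'ext4:*' > /sys/kernel/debug/tracing/set_event
    # Stream the events to a file in the background:
    cat /sys/kernel/debug/tracing/trace_pipe > /tmp/ext4-trace.txt &
    # Extended per-device stats every 5 seconds:
    iostat -xk 5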
I/O activity that has triggered this has varied:
- First time I saw it was when I was doing a parallel rsync of a large
amount of data. I thought I worked around it by disabling file system
barriers (see the mount example after this list). I noticed that at least
in the RHEL 6.0 technical notes they require barriers to be disabled when
running on enterprise storage (which we are). Our array has a mirrored
cache and is battery backed.
This volume was a raw device map.
- Basic log rotation of medium-sized log files from a VMFS-based ext4
volume to an NFS volume
- RRDtool activity from cacti to a raw device mapped volume
- Basic Splunk search/indexer activity (what I saw today).
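
For the record, this is how I had disabled the barriers mentioned in the
first item above (the LV path and mount point are examples, not our real
names):

    # One-off, without unmounting:
    mount -o remount,barrier=0 /data
    # Or persistently, via /etc/fstab:
    /dev/vg0/data  /data  ext4  defaults,barrier=0  0  2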
That's 3 different systems that I can think of off the top of my head. In
every case the
VM in question has at least two different virtual disks, and only one of the
virtual disks is affected. The other one (the root partition) is not.
Now that I know disabling barriers doesn't help, I have moved two of the
systems to ext3 instead (I didn't re-format, just remounted). I'm not sure
whether or not Ubuntu uses barriers by default on ext3 as well, or just on
ext4.
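
The closest I've come to checking the barrier defaults myself is this (my
understanding, which may be wrong, is that on mainline 2.6.32 ext4 defaults
barriers on while ext3 defaults them off, unless the distro patches that):

    # Explicitly-set barrier options show up in /proc/mounts:
    grep -E 'ext[34]' /proc/mounts
    # The kernel also logs a message if it has to turn barriers off
    # because the underlying device doesn't support them:
    dmesg | grep -i barrier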
If the solution is to stick to ext3 I have no problem with that - there's
nothing in ext4 that I need that I can think of, really.
Though it would be nice if there were a fix for the issue.
I have about 150 VMs spread over 8 hosts connected to the same storage. All
the VMs are managed the same way, so they all have the same software and
configuration.
If there is any other debug data I could gather that would be useful the
next time this happens (assuming ext3 didn't fix it), please let me know.
This is my first experience with Ubuntu + ext4 + LVM on an enterprise
storage array. Not that I expected much of a different experience from
RHEL (v4 and v5) + ext3 + LVM on the same array technology, which I had been
running for years w/o issue.