why did Ubuntu turn off kernel preemption in Edgy?

Karoliina Salminen karoliina.t.salminen at gmail.com
Mon Nov 27 08:20:32 GMT 2006


> > And why not have a laptop-battery-friendly kernel version then? What do
> > the likes of Fedora and openSuSE do about this issue? Why not fix those
> > timer related interrupt problems instead? Are the lkml hackers aware of
> > this?
>
> It was a few of those very hackers that told me PREEMPT was a red
> herring, and 1000HZ was not suggested. It's not a bug that 1000 timer
> interrupts a second causes extra CPU load. Think of it this way: 1000HZ
> is 10 times more interrupts than 100HZ. That's 10 times more CPU load
> from just that functionality.

Why couldn't 1000 Hz be used?
I could put it this way:
the CPU overhead rises from 0.1% to 1%. Who would notice that?
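To make the arithmetic explicit, here is a rough back-of-the-envelope
sketch in Python; the per-tick handler cost is an assumed figure for
illustration, not a measured one:

# Timer-interrupt overhead at 100 Hz vs. 1000 Hz.
# TICK_COST_US is an assumed per-interrupt cost, for illustration only.
TICK_COST_US = 10.0  # microseconds of CPU time per timer tick (assumption)

for hz in (100, 1000):
    overhead = hz * TICK_COST_US / 1e6
    print(f"{hz:4d} Hz -> {overhead:.1%} CPU overhead")

#  100 Hz -> 0.1% CPU overhead
# 1000 Hz -> 1.0% CPU overhead

Even if the real per-tick cost differs, the overhead scales linearly
with HZ and stays on the order of a percent.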
However, that overhead determines how long a latency I get with
ZynAddSubFx when I am playing it. Audio latency here means the longest
delay that can ever occur in the system, because any interruption in
filling the audio buffer comes out as noise, the whole track is ruined
and the recording has to be started all over again. The 100 Hz ->
1000 Hz change alone drops the latency from 100 ms to 10 ms, and
PREEMPT improves the situation even further. A 10 ms latency is
playable, since notes aren't usually shorter than that. The 100 ms
latency I was getting on Ubuntu before the 2.6.19 kernel is
unplayable, which is why I have been using OpenSuse 10.1 with Jacklab
(which includes a PREEMPT kernel) for audio recording, and it works
fine. I hope I can do the same with Ubuntu in the future, and I don't
see a reason why it couldn't be both: 1000 Hz + PREEMPT.
Mac OS X always has low latency available, and so does Windows;
latency and unresponsiveness have always been a problem on Linux. They
don't have any special distribution or product for music creation: it
all works on everybody's machine out of the box, and anyone who
invests in low-latency audio hardware gets low latency and can play
software synthesizers in real time from a keyboard. For example, I was
getting 2 ms latency on Windows XP with my ST-Audio DSP-2000 C-port;
on Ubuntu the same setup gave around 100 ms, which is unplayable. I
once installed Demudi on top of Ubuntu and it broke completely. After
that I didn't attempt anything for a while, until I was able to
install OpenSuse 10.1 with Jacklab, which worked out of the box.
However, requiring a different distro (Jacklab) is, product-wise, a
confusing way to offer this to end users: people expect to install one
OS and do everything with it, and that is what happens with Mac OS X
and Windows. For ideological reasons I don't use Windows, and for
hardware reasons I don't use Mac OS X (I don't currently have any Mac
hardware). On the other hand, I would like to make music with
completely free software, and lately, after a very long wait, I have
for the first time actually been able to do that - with Jacklab.

I once mentioned this latency issue at Debconf, and the response was
something like "isn't it only professional musicians who need low
latency?". Hmm. Maybe amateur musicians are better able to play with a
latency where, say, four notes have been played before the sound of
the first one comes out of the loudspeakers - a latency-compensating
brain - or maybe they only make ambient music with notes lasting at
least 10 minutes each ;) I think the issue is not well understood in
the community, since very few musicians tinker with Linux: it is so
much harder to make music on Linux than on e.g. Mac OS X.

> > Even though I'm not what I think you consider an 'audio user' I notice
> > hickups on rhythmbox's output everytime my disk is hit more often.

The only way to get rid of hiccups is to increase the length of the
audio buffers, which means more latency. Disk access causes hiccups,
of course, but the disk is often used heavily at the same time as some
music application - for example, Ardour is recording while other audio
applications are playing. The output of the software synthesizers goes
into Ardour as an audio track, and any crackle in that output, for any
reason at any time, ruins the recording; it then has to be done again
with longer audio buffers, i.e. more latency. And once the latency
gets above about 20 ms, playing live on the black-and-white keyboard
stops being feasible and the musician's use case no longer works.
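To put numbers on that, here is a small sketch of the arithmetic; the
period sizes below are just example values for illustration, not
settings any particular distro ships:

# Nominal output latency of a period-based audio setup (e.g. JACK over
# ALSA): latency = period_size * number_of_periods / sample_rate.
# The period sizes here are example values only.
SAMPLE_RATE = 48000  # samples per second

for period_frames, nperiods in ((2048, 2), (256, 2), (64, 2)):
    latency_ms = period_frames * nperiods / SAMPLE_RATE * 1000
    print(f"{period_frames:5d} frames x {nperiods} -> {latency_ms:5.1f} ms")

#  2048 frames x 2 ->  85.3 ms   (in the unplayable range)
#   256 frames x 2 ->  10.7 ms   (playable)
#    64 frames x 2 ->   2.7 ms   (needs capable hardware and a preemptible kernel)

The smaller the period, the more often the buffer has to be refilled
without interruption, which is exactly where kernel preemption and
timer resolution matter.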

My use case is the following:
- I play a soft synth while Ardour, Muse or some other multitrack
audio recorder records the audio stream.
- I may record the track as MIDI at the same time.
- If there are glitches in the playing, I edit the MIDI track and,
after the corrections, replay it so that the sequencer drives the soft
synth and the audio track gets replaced with the corrected one.
- Then I do some editing on the audio track and go on to record the
next track on top of it.

This is basically how I make all my music. With Windows software this
used to be very easy, but on Linux it has been quite problematic: for
example, the same program can't record both MIDI and audio, so you end
up synchronizing two programs, dealing with routing problems (jack),
and so on. That has nothing to do with the distribution, of course,
and I hope it will all be fixed some day, so that non-technical
musicians can happily install Linux and make electronic music with the
nice software synthesizers it offers even today (e.g. ZynAddSubFx).

Best Regards,
Karoliina Salminen


