Infrastructure vs. Interface
Felipe Figueiredo
philsf79 at gmail.com
Thu Jul 2 21:39:44 UTC 2009
Patrick,
thanks for your comments. I'll address them separately.
On Thu, 2009-07-02 at 13:40 -0500, Patrick Goetz wrote:
> The counterexample to this can be found in the just posted Pulseaudio
> v0.9.16~test1 availability announcement: Everyone seems to agree that
> audio on linux needs to suck less, so upgrades in this area are vital,
> yet this version of Pulseaudio depends on the 2.6.31 kernel. If the
> kernel isn't in Infrastructure, then what else could be?
snip
> The problem with this idea in general is that most people always want
> the newest, latest and greatest stuff. Just reading about the 2.6.30
> kernel here:
> http://www.h-online.com/open/Fine-tuning-What-s-new-in-Linux-2-6-30--/features/113478
> makes me wonder how I ever lived without it -- only a fool would use
> 2.6.28! <:)
snip and reorder
> I care a
> lot more that my ext4 filesystem doesn't completely lose a file that
> I've been working on for 8 months and just saved than I do if the
> installed Firefox is 3.0.14 or 3.0.19.
While that may well happen to particular users (it has probably happened
at least once to you, me, and others who follow this list), I don't think
this is a frequent issue for the "Persona" Ubuntu seems to address
(mentioned in [1] and [2]). I wanted to stick with Hardy because it was
an LTS, but convinced myself to upgrade to Jaunty [3], basically because
I was curious about the new notification system (I know, I suck at
coherence). But this is not the use case I'm advocating for: I could just
as well be using Debian testing and backport Ayatana, or recompile from
the PPAs. When I think about Ubuntu, I think about my sister, who is a
journalist and couldn't care less about the kernel or ext3 vs. ext4; my
mom, who barely knows the difference between OOo and MS Office; and my
dad, who is truly computer illiterate (he probably thinks a PC is to a
typewriter as a PDA is to a paper agenda). These people don't need the
latest version of everything.
OTOH, after some pondering, I'm not sure the kernel *must* be frozen, for
the exact reasons you pointed out, as well as for improved hardware
support. The essence of what I'm proposing, though, is a less
bleeding-edge stance, with more focus on trusted and proven technologies.
Regardless, I think the original idea already deals with this, if you
consider that the kernel could still be updated every 6 months. This way
an LTS user could easily backport the kernel if s/he really needed it.
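To put "easily backport" in concrete terms, here is a rough sketch of
what that could look like with the standard apt tools. The pocket and
package names below are only illustrative, and this assumes a newer
linux-image package actually gets published in the backports pocket:

  # enable the backports pocket, e.g. add to /etc/apt/sources.list:
  #   deb http://archive.ubuntu.com/ubuntu hardy-backports main restricted
  sudo apt-get update
  # pull only the newer kernel from that pocket, leaving the rest frozen
  sudo apt-get install -t hardy-backports linux-image-generic

Everything else in the frozen release stays untouched; only the kernel
comes from the newer pocket.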
> There is just as much if not more critical development
> going on at the infrastructure level (the kernel, filesystems,
> HAL->DeviceKit, etc.) as there is at the application level.
Good example. When I first heard about HAL, it seemed cool and I thought
hardware abstraction was solved once and for all. Now it's being phased
out, I'm not sure why (OT, never mind), and replaced by udev rules. Would
it be possible for everyone to agree on one true system (to rule them
all)?
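Just to illustrate what that migration means in practice: the kind of
policy HAL expressed in its .fdi files now ends up as plain udev rules,
something like the following (a made-up rule for a hypothetical USB
gadget, not taken from any actual package):

  # /etc/udev/rules.d/99-example.rules
  # give the plugdev group access to a (hypothetical) USB device
  SUBSYSTEM=="usb", ATTR{idVendor}=="1234", ATTR{idProduct}=="5678", MODE="0660", GROUP="plugdev"

So everyone who shipped or documented .fdi snippets gets to rewrite them
as rules, which is exactly the kind of churn I'm complaining about.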
See also the OSS4 vs. ALSA topic. Say I'm a hardware manufacturer and
want to create drivers for my sound chips, because my son's friend from
school convinced him to tell me it was a nice idea (and kids always get
their parents to bend, don't they?). I invest manpower to release a test
driver, and by the time it reaches the beta stage, the Linux community
has switched to another backend, which is incompatible. What happens?
(Hint: my son gets grounded.)
One good example of convergence is CUPS, which supplanted the old,
obsolete lpd and several of its replacements. I agree it's important to
have diversity (lprng is still around, isn't it?), but in some cases
diversity leads to duplication of work, more projects getting started
than finished, goals not being met, and frustration. IMO we would benefit
if the CUPS example were followed in each layer of the OS stack.
> Also, what does stability at the binary level for third party developers
> mean? I think you're taking it to mean that the underlying packages
> stay the same, but this is the wrong level of granularity, IMO. What's
> important for third party developers is that the API's stay the same.
Exactly. Sorry I wasn't clear, but by stable I meant it in the "Debian"
sense: frozen. I just realized that I might have been inadvertently
influenced by the Mac OS X cycle in this whole idea. They provide
something similar to SRUs for the supported releases, and third-party
developers know that if they create or port an application for that OS,
they'll have that environment for a "reasonable amount of time" (sic).
Compare this to the infrastructure changing (in the worst-case scenario)
every 6 months.
Ubuntu draws heavily from this open source custom of frequent change.
Let's not forget that while it's one of the strongest advantages of
FLOSS, it can sometimes be a hassle when you consider commercial support
(I'm sure at least some people who work in user support would agree, but
YMMV).
One example of how this negatively affects the OS is the difficulty of
keeping documentation (wikis or otherwise) up to date. When the backends
change, it's *hard* to keep users informed.
> Of course this is a tricky issue: maintaining backwards compatibility
> is another word for keeping obsolete, crufty old code in your system;
I think you're taking this particular issue to the extreme. See above.
> it's usually better to make a clean break whenever possible, otherwise
Occasionally, yes; definitely not *whenever possible*.
regards
FF
1. https://wiki.ubuntu.com/Usability
2. https://wiki.ubuntu.com/PaperCut
3. http://sciwannabe.blogspot.com/2009/06/why-i-upgraded-my-hardy-to-jaunty.html