upstart configuration

Saravanan Shanmugham (sarvi) sarvi at
Sat Nov 8 06:03:42 GMT 2008

Hi Scott,
   This is very good stuff.

On a first scan, what you are proposing below is quite similar in
thought to my suggestion, but for the syntax.
Your syntax does look cleaner.
I am going to take some time to review this in detail, thinking through
some examples based on your proposal, and will get back to you soon.

I do have one quick clarification.
You talk about Job, State and Event as 3 separate concepts, yet
currently we have the "Jobs as States" concept, i.e. a Job used purely
as a state, which seems to be working quite nicely. Is what you are
proposing really a departure from this? Is the "state" that you are
referring to really the current "Job as a State"?

Except that you have added the "while" syntax, which seems to be
applicable to Jobs as well as to states. I am not sure I understand
that part. Could you explain "And states can be combined with jobs in
the same definition, this actually creates one of each, and binds the
two together"?

My team is interested in this feature set.
How can we get involved in implementing and testing your proposal?


-----Original Message-----
From: upstart-devel-bounces at
[mailto:upstart-devel-bounces at] On Behalf Of Scott James
Sent: Thursday, November 06, 2008 9:04 AM
To: upstart-devel at
Subject: Re: upstart configuration

This thread made me realise that I've not really discussed the 0.10
plans yet, so this seems as good a time as any.  This is all still a
rough draft at the moment, so forgive any hand-waving and please feel
free to jump in.

Jobs, States and Events are separate; but connected.

A Job can be defined without a state (but gets one when started).
A State can be defined without a job (and never gets one).
Events are caused by State changes.
States can be defined based on States and Events.
Jobs can be started by Events during States.
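To check my own understanding of those five rules, here's a rough
Python sketch (not Upstart code; all names and shapes here are purely
illustrative):

```python
class State:
    """A State is inherently active from creation; Events are caused
    by its changes."""
    def __init__(self, name, active=False):
        self.name = name
        self.active = active
        self.listeners = []            # event handlers fired on change

    def set(self, active):
        if active != self.active:      # a change *is* the event
            self.active = active
            for fn in self.listeners:
                fn(self)

class Job:
    """A Job can be defined without a state, but gets one when
    started."""
    def __init__(self, name):
        self.name = name
        self.state = None

    def start(self):
        self.state = State(self.name + " running", active=True)

job = Job("foo")
assert job.state is None               # defined without a state...
job.start()
assert job.state.active                # ...but gets one when started
```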


Here's a job definition ("prototype"); it looks pretty similar to how
things are now:

	exec /sbin/daemon

In order to start this, we have to create a job and start it:

    # start --new foo

And you can start as many as you like (pending resources, etc.); we
obviously need to identify them so you can stop them.

They can only be stopped manually; nothing will automatically stop them
save shutdown.

Here's a state definition.  We don't consider these prototypes because
they're inherently active from the moment they are created:

        while spongebob or squarepants

This state will be true while either the spongebob or squarepants states
are true (or both).  We can also do while both are true:

	while spongebob
	while squarepants

This uses "while" as the operator because "while x and y" has an "OR"
meaning in English[0] ;)

This state can obviously be used in other states' matches.
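If I'm reading the combination rules right - "or" within one "while"
line, AND across multiple "while" lines - they could be sketched like
this in Python (illustrative only):

```python
def state_true(clauses, active):
    """clauses: each inner list is one "while" line (names OR'd);
    the outer list ANDs the lines together.
    active: set of state names currently true."""
    return all(any(name in active for name in clause)
               for clause in clauses)

# "while spongebob or squarepants" -> one clause, OR semantics
assert state_true([["spongebob", "squarepants"]], {"spongebob"})

# two "while" lines -> two clauses, AND semantics
both = [["spongebob"], ["squarepants"]]
assert not state_true(both, {"spongebob"})
assert state_true(both, {"spongebob", "squarepants"})
```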

And states can be combined with jobs in the same definition, this
actually creates one of each, and binds the two together:

        while apache2 running
        while dbus running

        exec /sbin/apache2-dbus-control

The state for this job will be true whenever apache2 and dbus are both
running, and the daemon will be automatically started and stopped within
this state.

"while apache2-dbus-control" isn't sufficient for dependencies, since
there's no guarantee it'll be running: instead you get the "running"
substate to use like we've done here.

You can manually stop and start it, but only when the state is true!  If
the state is false, it cannot be started (but see below :p)
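So a bound job follows its state, with manual control allowed only
inside it. A minimal sketch of that behaviour as I read it (again, just
illustrative Python, not the real implementation):

```python
class BoundJob:
    """A job bound to a state: started/stopped automatically with the
    state, manual control only while the state is true."""
    def __init__(self):
        self.state_true = False
        self.running = False

    def set_state(self, value):
        # started and stopped automatically with the state
        self.state_true = value
        self.running = value

    def stop(self):
        self.running = False           # manual stop

    def start(self):
        if not self.state_true:        # manual start refused otherwise
            raise RuntimeError("cannot start: state is false")
        self.running = True

job = BoundJob()
job.set_state(True)                    # apache2 and dbus both running
assert job.running                     # started automatically
job.stop()                             # manual stop is allowed
job.start()                            # manual restart, state still true
job.set_state(False)                   # a dependency went away
assert not job.running                 # stopped automatically
```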

What if you don't want the job always running in this state, but want to
wait for an event?

        while some-system-state
        on event

The job state will still be true, but the job won't be started; instead,
the event must happen *as well*.

We can also make a job manual:

        while some-system-state

This must be started and stopped manually, but there's no need to
specify --new since it's available to be started provided
some-system-state is also there.

Remember all that hassle about "instance" before?  That's gone away.
Let's modify our apache2-dbus-control job to use the session bus:

        while apache2 running
        while dbus-session running

        exec /sbin/apache2-dbus-control

If there are multiple instances of dbus-session running, you will have
multiple instances of the apache2-dbus-control job - one per session
bus.

Presumably the exported environment of that job gives this one enough
information (i.e. the bus address).

What if apache2 can have multiple instances too, one per site perhaps?

In that case, you'd get one instance of apache2-dbus-control per
instance of apache2 per bus.


If there were two instances of each, that would be four jobs.

You can minimise this with the "any" keyword:

    while any apache2 running

Now you just get one if there are *any* apache2 instances running ;)
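The expansion rule sounds like a plain cross product over the matched
parent instances, with "any" collapsing one axis. A quick Python
illustration (instance names are made up):

```python
from itertools import product

# illustrative instance names
apache2 = ["apache2-siteA", "apache2-siteB"]
dbus_session = ["bus-1", "bus-2"]

# default: one job instance per combination of parent instances
instances = list(product(apache2, dbus_session))
assert len(instances) == 4         # two of each -> four jobs

# "while any apache2 running" collapses the apache2 axis to one match
any_instances = list(product(["any-apache2"], dbus_session))
assert len(any_instances) == 2     # one per session bus
```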

We also do match minimisation.  Take the following two jobs - the first
we'll call frodo:

        while dbus-session running

and the second bilbo:

        while dbus-session running
        while frodo running

Now, imagine we have two session buses, -1 and -2; you'd get two frodo
jobs running, one for each:

frodo-1 DBUS_BUS_ADDRESS=...-1
frodo-2 DBUS_BUS_ADDRESS=...-2

You'd also expect to get a bilbo job PER bus address PER frodo job - in
other words, four different jobs:

bilbo-1 DBUS_BUS_ADDRESS=...-1 [ frodo-1 DBUS_BUS_ADDRESS=...-1 ]
bilbo-2 DBUS_BUS_ADDRESS=...-1 [ frodo-2 DBUS_BUS_ADDRESS=...-2 ]
bilbo-3 DBUS_BUS_ADDRESS=...-2 [ frodo-1 DBUS_BUS_ADDRESS=...-1 ]
bilbo-4 DBUS_BUS_ADDRESS=...-2 [ frodo-2 DBUS_BUS_ADDRESS=...-2 ]

Obviously this is wrong, since frodo and bilbo are both connected to the
same bus.  So some intelligence is employed.  The "while dbus-session
running" object is actually shared between all jobs that specify it; and
you only expand it once for any given job.

This means that you only get two bilbo jobs, one for each session bus -
each with the frodo *for that session bus* attached:

bilbo-1 DBUS_BUS_ADDRESS=...-1 [ frodo-1 DBUS_BUS_ADDRESS=...-1 ]
bilbo-2 DBUS_BUS_ADDRESS=...-2 [ frodo-2 DBUS_BUS_ADDRESS=...-2 ]
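A rough Python sketch of the minimisation rule as I read it - the
shared "dbus-session running" match is expanded once per job, so a
dependent job inherits the same instance rather than cross-producting
over it again (data shapes are mine, purely illustrative):

```python
buses = ["...-1", "...-2"]

# frodo: one instance per session bus
frodo = {bus: "frodo DBUS_BUS_ADDRESS=" + bus for bus in buses}

# bilbo shares the "while dbus-session running" match object, so it
# is expanded once: each bilbo binds the frodo *for its own bus*
bilbo = [("bilbo DBUS_BUS_ADDRESS=" + bus, frodo[bus])
         for bus in buses]

assert len(bilbo) == 2             # not four
```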

Re-use of match objects is true for both states and events, and actually
crosses the "while/on" boundary.  Take for example these two stanzas:

    while event
    on event

They both have different purposes: one specifies a state condition, the
other a start condition for a job.

But they both use the same underlying match object, and just do
different things with it.

Each time "event" becomes true, a new state is created for the "while
event" match.  This state never becomes false; it means "has this event
ever occurred?".  If nothing uses it, it can be discarded the next time
the event happens, since the condition remains true.

Each time "event" happens, any waiting jobs with "on event" will be
started.  This is simply an edge detection, so no memory is required.
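So the same match object backs two behaviours: "while event" latches
true forever, "on event" is memoryless edge detection. A small Python
sketch of that split (illustrative, not the real data structures):

```python
class EventMatch:
    """One shared match object serving both "while" and "on" uses."""
    def __init__(self):
        self.ever_occurred = False   # backs the "while event" state
        self.on_handlers = []        # waiting "on event" jobs

    def fire(self):
        self.ever_occurred = True    # state becomes (and stays) true
        for handler in self.on_handlers:
            handler()                # edge: one start per firing

started = []
m = EventMatch()
m.on_handlers.append(lambda: started.append("job"))
assert not m.ever_occurred
m.fire()
m.fire()
assert m.ever_occurred               # latched true forever
assert len(started) == 2             # started once per edge, no memory
```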

Most usefully though, it means that when you install a new job, the
memory of the current state of events is still in Upstart.  Provided
that another job (or just a state) has referenced any events you use in
your "while" condition, Upstart will know it's true.

Of course, you'll mostly use states rather than events for jobs, with
system-provided states for the most common events.

One last thing: you've seen that "while event" works and remains true
forever.  You can also do periods between events:

    from event-a until event-b

This results in a state that becomes true each time event-a occurs and
false again on each matching event-b.
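In Python terms, something like this (a deliberately flattened sketch -
the proposal suggests one state instance per event-a occurrence, which
this reduces to a single flag):

```python
class PeriodState:
    """State for "from event-a until event-b": true between the two."""
    def __init__(self):
        self.active = False

    def event_a(self):
        self.active = True

    def event_b(self):
        self.active = False

s = PeriodState()
assert not s.active
s.event_a()
assert s.active                    # true once event-a has occurred
s.event_b()
assert not s.active                # false again on the matching event-b
```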

Thus many states will be pre-shipped like this.

[0] "while eating and drinking" means while doing either, not both
Have you ever, ever felt like this?
Had strange things happen?  Are you going round the twist?
