API for Upgrader

John Arbash Meinel john at arbash-meinel.com
Sun Jun 23 06:04:13 UTC 2013


...

> 
>> type AgentToolsChange struct {
>>     Id string
> 
> I'm not sure it's necessary to send the id with every watcher
> response, as it's already known by the watcher.

It depends whether you go with 1 watcher per agent, or 1 watcher
across all agents. If you go with 1 watcher per agent, it is hard to
select across a dynamic number of channels. Go 1.1 introduces
reflect.Select to do this, but it is still clumsy because you have to
wrap all of your channels in reflect values and convert everything
back to the right types.

The Go 1.0 idiom I've seen is to fire off a goroutine for each
channel you want to select on, and have them all forward onto one
merged channel, which leaves you with 1 channel in the end anyway.
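
Roughly, the fan-in looks like this (a minimal sketch with made-up
types and names, not the real watcher API):

package main

import (
	"fmt"
	"sync"
)

type AgentToolsChange struct {
	Id string
}

// merge fans multiple per-agent change channels into a single channel,
// which is why tagging each change with its Id is handy.
func merge(sources ...<-chan AgentToolsChange) <-chan AgentToolsChange {
	out := make(chan AgentToolsChange)
	var wg sync.WaitGroup
	for _, src := range sources {
		wg.Add(1)
		go func(src <-chan AgentToolsChange) {
			defer wg.Done()
			for change := range src {
				out <- change
			}
		}(src)
	}
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}

func main() {
	a := make(chan AgentToolsChange)
	b := make(chan AgentToolsChange)
	go func() { a <- AgentToolsChange{Id: "machine-0"}; close(a) }()
	go func() { b <- AgentToolsChange{Id: "unit-mysql-0"}; close(b) }()
	for c := range merge(a, b) {
		fmt.Println("changed:", c.Id)
	}
}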

Note that LifecycleWatcher watches multiple things and returns a list
of the things that have changed. So while EntityWatcher does 1-by-1,
LifecycleWatcher does bulk watches.

Including the Id would make it easier to write code that can handle
more than 1 thing-being-watched, without having to change it later.

> 
>>     Tools state.Tools
>> }
>> 
>> 
>> var change AgentToolsChange
>> change, err := <-watcher.Changes()
>> 
>> 
>> 
>> The Id being passed can be either for a Machine agent or a Unit
>> agent. And both types of agents will use the same API.
>> 
>> The initial implementation on the APIServer side will just use 
>> state.WatchEnvironConfig(). With these steps:
>> 
>> 1) Initially we can just fire a watch response on any change to
>> EnvironConfig. This will trigger a few too many probes, but the
>> Upgrader code already has a:
>> 
>> if proposed == version.Current.Number {
>>     noDelay()
>>     break
>> }
>> 
>> So it doesn't create much extra load on the system.
>> 
>> 2) Stage 2 can have the API server track what the current API
>> version is, and only fire the watcher when API version changes.
>> That mostly just changes the load on the API server, because it
>> doesn't have to search for updated tools.
>> 
>> 3) Stage 3 only tells Machine or Unit agents that there is a new
>> API version once it finds there are binaries for them to
>> download. (We might do this before 2.) This handles the case of
>> heterogeneous series and arch environments where someone doesn't
>> upload all of the possible tools.
>> 
>> 4) In stage 4 we can get extra fancy by only telling the Machine
>> that there is a new API version to download, and only once the
>> Machine reports a new Machine API do we tell the Unit agents to
>> upgrade.
> 
> I think this might be difficult to achieve. In a HA scenario, how 
> do the various API servers coordinate this information? If we want
> to do this, it's probably easier and more efficient to coordinate
> locally.

I'm told we already have a database field for Machine.AgentVersion,
which means the logic is:

Machine watchers watch the global AgentVersion (in EnvironConfig).
When that gets updated, the desired version for each Machine is
updated. When the Machine reports back an updated AgentVersion, that
triggers the Units on that machine to upgrade their agents.

No need to coordinate between API servers, as it is just watching
fields in the DB.
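
As a rough sketch of that cascade (hypothetical helper types, not the
real state API):

package main

import "fmt"

// Hypothetical stand-ins for the real state documents.
type Unit struct{ DesiredVersion string }

type Machine struct {
	DesiredVersion  string
	ReportedVersion string
	Units           []*Unit
}

// When the global AgentVersion in EnvironConfig changes, the machine's
// desired version is bumped; the machine agent's watcher fires on this.
func onEnvironAgentVersionChange(m *Machine, global string) {
	m.DesiredVersion = global
}

// Only once the machine agent reports that it is actually running the
// new version do the units on that machine get told to upgrade.
func onMachineAgentVersionReported(m *Machine) {
	for _, u := range m.Units {
		u.DesiredVersion = m.ReportedVersion
	}
}

func main() {
	m := &Machine{ReportedVersion: "1.10.0", Units: []*Unit{{}, {}}}
	onEnvironAgentVersionChange(m, "1.11.0")
	fmt.Println("machine should upgrade to:", m.DesiredVersion)

	m.ReportedVersion = "1.11.0" // machine agent upgraded and reported back
	onMachineAgentVersionReported(m)
	fmt.Println("units should upgrade to:", m.Units[0].DesiredVersion)
}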

> 
>> This helps with the "thundering herd" case, where someone
>> requests an upgrade, and then every agent wakes up at once and
>> tries to download the new tools. (So if you have a Machine with a
>> Unit and a subordinate Unit, you would download the tools 3 times,
>> only to have 2 of them get thrown away.)
>> 
>> 5) While it doesn't make a lot of sense today to have a Watcher
>> across multiple agents, it is possible we will change
>> responsibilities (say one upgrader per container).
> 
> I'm not sure what you mean by "having a Watcher across multiple
> agents" here.

1 watcher watching for changes to multiple agents. An example was an
Upgrader that ran separately from the individual agents. So the
upgrader could notice that the Machine needed upgrading as well as the
3 Uniters running on that machine (and presumably itself as well).

> 
>> We pass the Arch and Series to the API Server for this, so we
>> can look up the Tools URL for the agents directly in the API.
>> Eventually, when Ian's patch lands, that sort of information may
>> already be stored in the state database, but it isn't there
>> today, and requiring it in the API means you can upgrade to 1.11
>> and still have the upgrader function normally.
>> 
>> Should the change itself be an array? e.g.:
>> 
>> type AgentTools struct {
>>     Id    string
>>     Tools *state.Tools
>> }
>> 
>> type AgentToolsChange []AgentTools
>> 
>> Or should watcher.Changes fire multiple times for multiple
>> units?
> 
> I think we should use the existing watcher architecture where we
> have one watcher resource for each thing being watched and the
> client calls Next on any given watcher to retrieve the next change.
> The "bulk" interface simply returns one watcher for each id in the
> request.

As mentioned earlier, the Go rules about 'select' make it harder to
actually handle watching more than one thing that way. And
LifecycleWatcher already watches multiple things, so it isn't as
though the existing Watcher architecture only ever watches 1 thing.
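
A bulk-style interface for the upgrader could look something like
this (illustrative names and a placeholder Tools type, not the actual
juju API):

package upgrader

// Tools is a placeholder for state.Tools (URL, version, etc.).
type Tools struct {
	URL string
}

// AgentTools pairs an agent id with the tools it should be running.
type AgentTools struct {
	Id    string
	Tools *Tools
}

// AgentToolsWatcher delivers batched changes for many agents over a
// single channel, in the spirit of LifecycleWatcher's bulk changes.
type AgentToolsWatcher interface {
	// Changes sends the set of agents whose desired tools changed
	// since the last receive.
	Changes() <-chan []AgentTools
	Stop() error
}

Receiving a slice per event means a single loop can handle 1 or N
agents without restructuring the client code later.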

> 
> That's the way the machine watcher that Martin has just landed
> works, for example.
> 
> cheers, rog.
> 

Has this actually landed? I see a proposal, but it hasn't been
approved yet.

John
=:->