thoughts on priorities

roger peppe roger.peppe at canonical.com
Thu May 2 10:40:47 UTC 2013


On 2 May 2013 10:53, William Reade <william.reade at canonical.com> wrote:
> On Thu, 2013-05-02 at 10:35 +0100, roger peppe wrote:
>> On 2 May 2013 10:09, William Reade <william.reade at canonical.com> wrote:
>> > On Tue, 2013-04-30 at 11:07 +0100, roger peppe wrote:
>> >> I don't think this is too hard. Currently the only thing that an
>> >> Upgrader does with respect to an Environ is to list the available
>> >> tools. We can quite easily make that capability available across
>> >> the API (or store the tools list in the state if we want to
>> >> reduce provider calls). Then all the existing logic will work as is.
>> >> We might possibly wish to change it in the future, but it seems
>> >> like a reasonable way forward to me.
>> >
>> > We need a watcher, and the existing environment config watcher is
>> > unsuitable. The upgrader will have to change to account for that, at
>> > least; and we may as well implement that watcher to do what we
>> > ultimately want, which is deliver the tools we actually need in the
>> > first place.
>>
>> There's a significant difference there though.
>>
>> Currently we watch a single thing (the environment config)
>> for the version change. It seems like you are proposing
>> that every agent should be able to watch for changes to its own
>> tools. That seems to imply that we'd need to
>> set the tools individually for each agent, which means
>> that we need to do that all in one transaction, or we
>> need another agent that is responsible for running the tools
>> selection logic and setting the tools for each agent based
>> on the global version.
>
> I didn't intend to imply any of the things you inferred. Do you disagree
> that the environment config watcher is unsuitable here? It spews secrets
> all over the place...

Not at all. But I think a cheap (and sufficient, and not inelegant) solution
is just to move the version into a separate document and have a watcher
for that, meaning that almost all our existing logic remains unchanged and
we have more time for implementing other things.

>> I suggest that the current distributed model works quite well - all
>> agents watch only the global current version, and then they
>> each run their own tools selection logic. Then we're using
>> a model that's basically the same as we're using now
>> and no agent or unlimited-size transaction is required.
>
> Neither of those things are required. We can use roughly the same state
> infrastructure *inside* the API, we just access it from a different
> place and (ideally) send down information that's actually correct
> (rather than a vague suggestion that, meh, something might have changed
> and you might be able to respond to it).

This is something that every agent will be watching. If every agent
is watching exactly the same data (the global version changing),
then we're doing less work on the API hub and we can potentially
use fewer resources by caching the information there.
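
As a rough sketch of the caching I mean (invented names, nothing like
the real API server code): the hub keeps one cached copy of the
proposed version and fans changes out to every connected agent, so a
thousand agents watching the same value cost us one state watch rather
than a thousand.

    package main

    import (
        "fmt"
        "sync"
    )

    // versionHub keeps a single cached copy of the proposed version on
    // the API server and notifies every subscribed agent connection
    // from that cache.
    type versionHub struct {
        mu      sync.Mutex
        current string
        subs    []chan string
    }

    // subscribe is what an agent's API connection would do; a new
    // subscriber sees the cached value immediately.
    func (h *versionHub) subscribe() <-chan string {
        h.mu.Lock()
        defer h.mu.Unlock()
        ch := make(chan string, 1)
        ch <- h.current
        h.subs = append(h.subs, ch)
        return ch
    }

    // setVersion is called once, when the single underlying state
    // watcher fires; agents are then notified from the cache.
    func (h *versionHub) setVersion(v string) {
        h.mu.Lock()
        defer h.mu.Unlock()
        h.current = v
        for _, ch := range h.subs {
            // replace any pending notification so an agent always
            // sees the latest value
            select {
            case <-ch:
            default:
            }
            ch <- v
        }
    }

    func main() {
        h := &versionHub{current: "1.10.0"}
        a, b := h.subscribe(), h.subscribe()
        h.setVersion("1.11.0")
        fmt.Println(<-a, <-b) // both agents see 1.11.0
    }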

If the version changes, an agent *will* respond to it, apart from
the rare case where the tools are not available for that agent. That's
not a case we need to be too concerned about, IMO.

I also feel that, from an architectural point of view, it makes sense
to have each agent responsible for choosing which tools it will run.
I'm not sure the centralised logic is a big help here, and it
may make some things harder in the future - for example,
in a cross-provider juju we may want different agents to fetch
from different tools repos.
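
The per-agent selection logic I'm talking about is small - roughly the
sketch below, where each agent filters the available tools by its own
series/arch against the global target version. The types here are
illustrative only, not the real state.Tools.

    package main

    import "fmt"

    // Tools describes one uploaded tools tarball: a version plus the
    // series/arch it was built for.
    type Tools struct {
        Version string
        Series  string
        Arch    string
        URL     string
    }

    // selectTools is the per-agent logic: given the global target
    // version and the available tools, pick the entry matching this
    // agent's series/arch, or report that none is available.
    func selectTools(avail []Tools, target, series, arch string) (Tools, bool) {
        for _, t := range avail {
            if t.Version == target && t.Series == series && t.Arch == arch {
                return t, true
            }
        }
        return Tools{}, false
    }

    func main() {
        avail := []Tools{
            {"1.11.0", "precise", "amd64", "https://example.com/juju-1.11.0-precise-amd64.tgz"},
            {"1.11.0", "precise", "i386", "https://example.com/juju-1.11.0-precise-i386.tgz"},
        }
        if t, ok := selectTools(avail, "1.11.0", "precise", "amd64"); ok {
            fmt.Println("upgrading using", t.URL)
        } else {
            fmt.Println("no matching tools; staying on the current version")
        }
    }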

>> > At the moment, we have a big blind spot wrt upgrades to versions that
>> > aren't available for a particular series/arch, and I feel like a
>> > `Machine.WatchTools()` with `Changes() <-chan *state.Tools` fits neatly
>> > with the (IMO) requirement that we store machine arch in state (so we
>> > can avoid half-upgrades that don't help anyone).
>>
>> I'm not sure how this helps. Surely the way to get around this problem
>> is to avoid upgrading to a version that we don't have a full set of tools
>> for? ISTM the better way to do that is to find the current set of
>> machine architectures and query the available tools to make sure that
>> we have the right version of the tools for all of them, before
>> upgrading.
>
> Yes; but to do that we'd need to know the arches of all the machines in
> play; and if we had all that information accessible to the API in the
> first place I'm not sure why we'd want to spend roundtrips on getting
> redundant data, rather than just sending the right stuff in the first
> place.

I'm not sure I understand what you're saying here.
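
For what it's worth, the pre-upgrade check I was describing above is
roughly this shape (all names made up):

    package main

    import "fmt"

    // toolsKey identifies one series/arch combination present in the
    // environment.
    type toolsKey struct {
        Series string
        Arch   string
    }

    // completeFor reports whether every series/arch in use has tools
    // available at the target version, so we never bump the global
    // version into a half-upgradable state.
    func completeFor(target string, needed []toolsKey, available map[toolsKey][]string) bool {
        for _, k := range needed {
            found := false
            for _, v := range available[k] {
                if v == target {
                    found = true
                    break
                }
            }
            if !found {
                return false
            }
        }
        return true
    }

    func main() {
        needed := []toolsKey{{"precise", "amd64"}, {"precise", "armhf"}}
        available := map[toolsKey][]string{
            {"precise", "amd64"}: {"1.10.0", "1.11.0"},
            {"precise", "armhf"}: {"1.10.0"},
        }
        // false: no armhf tools at 1.11.0, so don't propose the upgrade
        fmt.Println(completeFor("1.11.0", needed, available))
    }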

>> The only way we can cut off state access is by removing the
>> mongo accounts or changing the passwords on them.
>> It seems to me that that's a thing that we could provide an API
>> call for - once everything has been upgraded to use the API,
>> the user can decide to cut off mongo access. So I don't think we
>> *need* a stop-the-world major-version upgrade for this.
>
> Hmm, I think that's still potentially racy. If *agents* cut off their
> own API access, though... yeah, we can probably make it work. Nice.

It's only racy if there are still some agents around that are directly
accessing mongo AFAICS. If by "cutting off their own API access"
you mean "stop talking to mongo directly", then I'm with you.
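
On the mechanics, the API call I have in mind is not much more than the
sketch below: revoke the per-agent mongo accounts in one place once we
know everything is talking to the API. All names here are invented for
illustration; the real account handling would live behind state.

    package main

    import (
        "errors"
        "fmt"
    )

    // mongoAdmin stands in for whatever lets us drop (or re-password)
    // the per-agent mongo accounts.
    type mongoAdmin interface {
        removeAccount(user string) error
    }

    // CutOffStateAccess revokes direct mongo access for the given agent
    // accounts, once the user (or the agents themselves) confirm that
    // everything has moved to the API.
    func CutOffStateAccess(admin mongoAdmin, agentUsers []string) error {
        for _, u := range agentUsers {
            if err := admin.removeAccount(u); err != nil {
                return fmt.Errorf("cannot revoke mongo access for %s: %v", u, err)
            }
        }
        return nil
    }

    type fakeAdmin struct{}

    func (fakeAdmin) removeAccount(user string) error {
        if user == "" {
            return errors.New("empty user name")
        }
        fmt.Println("removed mongo account for", user)
        return nil
    }

    func main() {
        users := []string{"machine-0", "machine-1", "unit-wordpress-0"}
        if err := CutOffStateAccess(fakeAdmin{}, users); err != nil {
            fmt.Println("error:", err)
        }
    }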


