[ANN] charm-tools 1.9.3

Simon Davy bloodearnest at gmail.com
Fri Nov 27 22:02:25 UTC 2015


On Friday, 27 November 2015, Marco Ceppi <marco.ceppi at canonical.com> wrote:
> On Thu, Nov 26, 2015 at 3:05 AM Simon Davy <bloodearnest at gmail.com> wrote:
>>
>> On Thursday, 26 November 2015, Marco Ceppi <marco.ceppi at canonical.com> wrote:
>> > On Wed, Nov 25, 2015 at 4:08 PM Simon Davy <bloodearnest at gmail.com> wrote:
>> >>
>> >> On 25 November 2015 at 16:02, Marco Ceppi <marco.ceppi at canonical.com> wrote:
>> >> > ## Wheel House for layer dependencies
>> >> >
>> >> > Going forward we recommend all dependencies for layers and charms be
>> >> > packaged in a wheelhouse.txt file. This performs the installation of
>> >> > PyPI packages on the unit instead of first on the local machine,
>> >> > meaning Python libraries that require architecture-specific builds
>> >> > will do so on the unit's architecture.
>> >>
>> >> If I'm understanding the above correctly, this approach is a blocker
>> >> for us.
>> >>
>> >> We would not want to install directly from PyPI on a production service:
>> >>
>> >>  1) PyPI packages are not signed (or when they are, pip doesn't verify
>> >> the signature)
>> >>  2) PyPI is an external dependency and thus unreliable (although not
>> >> as bad these days)
>> >>  3) old versions can disappear from PyPI at an author's whim
>> >>  4) installing C packages involves installing a C toolchain on your
>> >> prod machine
>> >>
>> >> Additionally, our policy (Canonical's, that is) does not allow access
>> >> to the internet on production machines, for very good reasons. This is
>> >> the default policy in many (probably most) production environments.
>> >>
>> >> Any layer or charm that consumes a layer that uses this new approach
>> >> for dependencies would thus be unusable to us :(
>> >>
>> >> It also harms repeatability, and I would not want to use it even if
>> >> our access policy allowed access to pypi.
>> >>
>> >> For Python charm dependencies, we use system Python packages as much
>> >> as possible, or if we need any wheels, we ship them in the charm and
>> >> pip install them directly from there. No external network, completely
>> >> repeatable.
>> >
>> > So, allow me to clarify. If you review the pastebin outputs from the
>> > original announcement email, what this shift does: previously `charm
>> > build` would create and embed installed dependencies into the charm
>> > under lib/, much like charm-helper-sync did, but for any arbitrary
>> > PyPI dependency. The issue there is that for PyYAML it will build a
>> > yaml.so file based on the architecture of your machine and not the
>> > cloud's.
>>
>> Right. This was the bit which confused me, I think.
>>
>> Can we not just use python-yaml, as it's installed by default on cloud
>> images anyway?
>>
>> We use virtualenv with --system-site-packages, and use system packages
>> for Python libs with C extensions where possible, leaving wheels for
>> things that aren't packaged or that we want newer versions of.
>>
>
> Again, this is for hook dependencies, not exactly for dependencies of the
> workload.

Right. I understand that :)

I described how we solve this for our Python app payloads as a possible
model for solving it for Python charm deps too, but of course those deps
would be completely separate things, not even installed in the same
virtualenv.


> The charm could apt install python-yaml, but using --system-site-packages
> when building is something I'd discourage, since not everyone has the same
> apt packages installed.

Except that they do specifically have python-yaml installed, I believe. It's
installed by default in Ubuntu cloud images, due to cloud-init I think.

But yes, other system Python packages could be exposed. I wish once again
there was a way to include a specific list of system packages in a
virtualenv, rather than all of them.
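
Something like this is what I have in mind (just a sketch; the venv path
and Python version are assumptions):

    # Sketch: expose just the system PyYAML inside a virtualenv, rather
    # than enabling --system-site-packages wholesale. Paths assume Ubuntu
    # with Python 2.7; the venv location is hypothetical.
    import os

    SYSTEM_DIST = "/usr/lib/python2.7/dist-packages"  # apt-installed packages
    VENV_SITE = "/srv/charm/venv/lib/python2.7/site-packages"

    # Symlink only the package we want; everything else in dist-packages
    # stays invisible to the venv.
    os.symlink(os.path.join(SYSTEM_DIST, "yaml"),
               os.path.join(VENV_SITE, "yaml"))

    # Note: python-yaml also ships a C accelerator (_yaml.so) alongside
    # the package; it would need the same treatment if CLoader is wanted.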

And it should be easy enough to add a way to declare which system packages
are required by a layer?
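
Purely hypothetically, the shape could be as simple as a key in layer.yaml
that charm build (or the basic layer) acts on at install time. The
"system-packages" key below is invented, just to sketch the idea:

    # Hypothetical: a layer declares the apt packages it needs in
    # layer.yaml, and they get installed on the unit before the venv
    # is created.
    import subprocess
    import yaml

    with open("layer.yaml") as f:
        layer = yaml.safe_load(f) or {}

    for pkg in layer.get("system-packages", []):  # invented key, not real
        subprocess.check_call(["apt-get", "install", "-y", pkg])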

> Unless that user is building on a fresh cloud image, there's a chance they
> won't catch some packages that don't get declared.
> We'd be interested in making this a better story. The wheelhousing of
> dependencies not yet available in the archive, instead of embedding them
> in the charm, was a first step but certainly not the last. I'm not sure
> how this would work when we generate a wheelhouse, since the wheelhouse
> generation grabs dependencies of the install. That's why PyYAML is showing
> up in the generated charm artifact. We're not explicitly saying "include
> PyYAML"; we're simply saying we need charmhelpers and charms.reactive from
> PyPI as a minimum dependency for all charm hooks built with charm build to
> work. Suggestions around this are welcome.

Right, the wheelhouse seems a good approach for that. I'm just wondering if
we can do a specific solution for python-yaml that avoids the binary wheel
(which AIUI is brittle even on the same arch, due to glibc/cc versions).
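
E.g. ship source distributions in the wheelhouse rather than built wheels,
and install from them on the unit with no network. A sketch only: `pip
download` assumes a newer pip than is common today, and the paths are
illustrative:

    # Build-machine side: fetch sdists only, so nothing architecture-
    # specific (or glibc/compiler-coupled) is baked into the charm.
    import subprocess

    subprocess.check_call([
        "pip", "download", "--no-binary", ":all:",
        "--dest", "wheelhouse", "-r", "wheelhouse.txt",
    ])

    # Unit side: install strictly from the shipped wheelhouse, never PyPI.
    subprocess.check_call([
        "pip", "install", "--no-index",
        "--find-links", "wheelhouse", "-r", "wheelhouse.txt",
    ])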

I would be surprised if there were many charm Python deps that had C
extensions?

Also, IIRC there is a pure-Python YAML lib? Given speed is not an issue here
(the YAML is small and infrequently parsed), maybe we could look at that?
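
Actually, PyYAML itself already degrades to pure Python when its C extension
is absent; the C loaders are just optional libyaml bindings. For example:

    # PyYAML works without its C extension: CSafeLoader binds libyaml,
    # and when _yaml is missing we fall back to the pure-Python
    # SafeLoader. For small, rarely-parsed files the speed difference
    # shouldn't matter.
    import yaml

    try:
        from yaml import CSafeLoader as Loader  # libyaml-backed (_yaml.so)
    except ImportError:
        from yaml import SafeLoader as Loader   # pure-Python fallback

    with open("layer.yaml") as f:
        data = yaml.load(f, Loader=Loader)
    print(data)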


Thanks

-- 
Simon