APPLIED: [SRU][N:aws-6.17][PATCH 0/1] UBUNTU: [Config] enable MEMCG_V1 and CPUSETS_V1 on Noble AWS
Magali Lemes
magali.lemes at canonical.com
Tue Mar 17 20:44:42 UTC 2026
On 3/13/26 13:38, Gabriela Bittencourt wrote:
> BugLink: https://bugs.launchpad.net/bugs/2122368
>
> Reference: https://lists.ubuntu.com/archives/kernel-team/2025-September/163113.html
>
> [Impact]
>
> Although v1 cgroups are deprecated in Noble, it was still possible for
> users on 6.8 kernels to utilize them. This was especially helpful in
> the Noble migration process. It allowed users to pick up the new OS and
> then separately upgrade their remaining v1 cgroups applications. This
> unblocked the migration path for v1 cgroups users, because v1 support
> could be provisionally enabled until the necessary support was available
> in the applications that still lack v2 support.
>
> Starting in 6.12, CONFIG_MEMCG_V1, CONFIG_CPUSETS_V1 and
> CONFIG_PROC_PID_CPUSET were added and defaulted to false. Noble 6.8
> users who were unlucky enough to still need these v1 cgroups found that
> they could no longer use them from the 6.14 kernel onwards.
>
> Some of the specific failures that were encountered include older JVMs
> that fail to correctly handle missing controllers from /proc/cgroups.
> If memory or cpuset are absent, the container limit detection is turned
> off and the JVM uses the host's limits. JVMs configured in containers
> with specific memory usage percentages then end up consuming too much
> memory and often crash.
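The failure mode described above boils down to a presence check against /proc/cgroups. The following is a self-contained sketch of that check, not code from any JVM: it parses a captured sample file (the name "sample_cgroups" and its contents are invented for illustration) rather than the live /proc/cgroups, so it runs anywhere.

```shell
# Columns in /proc/cgroups are: subsys_name hierarchy num_cgroups enabled.
# Write an illustrative sample that lacks the memory and cpuset lines,
# mirroring a kernel built without MEMCG_V1/CPUSETS_V1.
cat > sample_cgroups <<'EOF'
#subsys_name hierarchy num_cgroups enabled
cpu 0 208 1
pids 0 208 1
dmem 0 1 1
EOF

missing=""
for ctrl in memory cpuset; do
    # A controller counts as usable only if listed with enabled == 1.
    if awk -v c="$ctrl" '$1 == c && $4 == 1 { ok = 1 } END { exit !ok }' sample_cgroups; then
        echo "$ctrl: present"
    else
        echo "$ctrl: missing (limit detection falls back to host limits)"
        missing="$missing $ctrl"
    fi
done
```

On a real system you would point the check at /proc/cgroups itself; with both controllers absent, a runtime doing this check would conclude it is not containerized for memory purposes.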
>
> Apache Yarn is still completing their v1 -> v2 migration, which leaves
> some Hadoop use cases without proper support.
>
> The request here is to enable these V1 controllers on Noble, but not
> newer releases, for as long as the Noble AWS kernel train still has
> kernels with upstream cgroup v1 support. This gives users a little bit
> longer to complete their migration while still using newer hardware, but
> with the understanding that this really is the end of the line for v1
> cgroups.
>
> [Fix]
>
> Re-enable the missing v1 controllers in the 6.17 Noble config.
>
> In 6.8 there were 14 controllers, and the current 6.17 config also has
> 14 controllers. The difference is that in the current 6.17 build the
> dmem controller was added, while the cpuset and memory controllers were
> removed.
>
> Diffing both the /proc/cgroups and configs between the 6.17 and 6.8
> releases gives:
>
> -CPUSETS_V1 n
> -MEMCG_V1 n
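The comparison above can be reproduced along these lines. The file names and config contents below are illustrative stand-ins, not the actual Ubuntu build configs; on a real system they would come from /boot/config-&lt;version&gt; or the kernel build tree.

```shell
# Stand-in config fragments for the two kernel versions.
cat > config-6.8 <<'EOF'
CONFIG_CPUSETS=y
CONFIG_MEMCG=y
EOF
cat > config-6.17 <<'EOF'
CONFIG_CPUSETS=y
# CONFIG_CPUSETS_V1 is not set
CONFIG_MEMCG=y
# CONFIG_MEMCG_V1 is not set
EOF

# Restrict the diff to the cgroup options of interest.
grep -E 'CPUSETS|MEMCG' config-6.8  > v1.old
grep -E 'CPUSETS|MEMCG' config-6.17 > v1.new
diff v1.old v1.new || true   # diff exits 1 when the files differ
```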
>
> These differences were also corroborated via source inspection. Changes
> in 6.12 moved these controllers behind ifdef guards whose new config
> options default to disabled when running make olddefconfig.
>
> In order to ensure that 6.17 has the same v1 cgroup controllers enabled
> as 6.8, enable both CONFIG_CPUSETS_V1 and CONFIG_MEMCG_V1 for Noble.
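For illustration, in the debian.aws-6.17/config/annotations file touched by the patch, the change would look roughly like the following. This is a sketch only: the policy and note lines in the applied patch may differ, and the note text here is invented (the LP number is the BugLink above).

```
CONFIG_CPUSETS_V1                               policy<{'amd64': 'y', 'arm64': 'y'}>
CONFIG_CPUSETS_V1                               note<'LP: #2122368 - keep cgroup v1 cpuset support on Noble'>
CONFIG_MEMCG_V1                                 policy<{'amd64': 'y', 'arm64': 'y'}>
CONFIG_MEMCG_V1                                 note<'LP: #2122368 - keep cgroup v1 memory support on Noble'>
```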
>
> [Test]
>
> Booted a kernel with this change and validated that the missing v1
> memcgs were present again.
>
> Before:
>
> $ grep memory /proc/cgroups
> $ grep cpuset /proc/cgroups
>
> with v1 cgroups enabled:
>
> $ mount | grep cgroup | grep memory
> $ mount | grep cgroup | grep cpuset
>
> $ ls /sys/fs/cgroup | grep memory
> $ ls /sys/fs/cgroup | grep cpuset
>
> After:
>
> $ grep memory /proc/cgroups
> memory 12 92 1
> $ grep cpuset /proc/cgroups
> cpuset 13 1 1
>
> with v1 cgroups enabled:
>
> $ mount | grep cgroup | grep memory
> cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
> $ mount | grep cgroup | grep cpuset
> cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
>
> $ ls /sys/fs/cgroup | grep memory
> memory
> $ ls /sys/fs/cgroup | grep cpuset
> cpuset
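For reference, the four columns in the /proc/cgroups lines above are subsys_name, hierarchy ID, number of cgroups, and the enabled flag. A small awk sketch decoding the "After" output:

```shell
# Decode the two lines shown in the "After" section.
printf '%s\n' 'memory 12 92 1' 'cpuset 13 1 1' |
awk '{ printf "%s: hierarchy=%s num_cgroups=%s enabled=%s\n", $1, $2, $3, $4 }'
# prints: memory: hierarchy=12 num_cgroups=92 enabled=1
#         cpuset: hierarchy=13 num_cgroups=1 enabled=1
```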
>
> A config diff of the previous build versus a build cranked from these
> patches:
>
> CPUSETS_V1 n -> y
> MEMCG_V1 n -> y
>
> [Where problems can occur]
>
> Since these changes re-introduce code that was disabled via ifdef,
> there's a possible increase in the binary size. After comparing the
> results from an identical build with these config flags disabled, the
> difference in compressed artifact size for an x86 vmlinuz is an increase
> of 16k.
>
> The difference in uncompressed memory usage after boot is an increase of
> 40k, broken down as 21k code, 19k rwdata, 12k rodata, 8k init, -28k
> bss, and 8k reserved.
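As a quick sanity check, the per-section deltas quoted above do sum to the stated 40k:

```shell
# code + rwdata + rodata + init + bss + reserved (kilobytes)
echo $((21 + 19 + 12 + 8 - 28 + 8))   # prints 40
```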
>
> The primary remaining risk is future breakage of these interfaces,
> since they are no longer part of the default configuration. If they are
> also not part of upstream's test matrix, further breakage is possible.
> However, the author knows of no actual v1 cgroups breakage at the time
> this patch is being submitted.
>
> Gabriela Bittencourt (1):
> UBUNTU: [Config] enable MEMCG_V1 and CPUSETS_V1 on Noble AWS
>
> debian.aws-6.17/config/annotations | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
Applied to noble:linux-aws/aws-6.17-next.
Thanks!
Magali