When trying to make container archives in CBS for CentOS Hyperscale using kiwi, I get a bizarre error:
[ DEBUG ]: 23:16:53 | EXEC: [buildah commit --rm --format oci kiwi-container-xly70m kiwi-image-bjab0z:tag-y9pac1]
[ DEBUG ]: 23:17:31 | EXEC: Failed with stderr: Getting image source signatures
Copying blob sha256:7b1dce08395eec884b14ff7af786d47e0a9f696cccda8ebcf001e2a6be63da9c
Error: committing container "kiwi-container-xly70m" to "kiwi-image-bjab0z:tag-y9pac1": copying layers and metadata for container "1eac0f7c73d10820be99ceb4aa146c2344af23213cd22b465dca4b9dd6301322": writing blob: adding layer with blob "sha256:7b1dce08395eec884b14ff7af786d47e0a9f696cccda8ebcf001e2a6be63da9c"/""/"sha256:7b1dce08395eec884b14ff7af786d47e0a9f696cccda8ebcf001e2a6be63da9c": unpacking failed (error: exit status 1; output: remount /, flags: 0x44000: invalid argument)
, stdout: (no output on stdout)
[ DEBUG ]: 23:17:31 | Looking for buildah in /usr/bin:/bin:/usr/sbin:/sbin
[ DEBUG ]: 23:17:31 | EXEC: [buildah umount kiwi-container-xly70m]
[ DEBUG ]: 23:17:31 | Looking for buildah in /usr/bin:/bin:/usr/sbin:/sbin
[ DEBUG ]: 23:17:31 | EXEC: [buildah rm kiwi-container-xly70m]
[ DEBUG ]: 23:17:32 | Looking for buildah in /usr/bin:/bin:/usr/sbin:/sbin
[ DEBUG ]: 23:17:32 | EXEC: [buildah rmi kiwi-image-bjab0z:tag-y9pac1]
[ DEBUG ]: 23:17:32 | EXEC: Failed with stderr: Error: 1 error occurred:
* kiwi-image-bjab0z:tag-y9pac1: image not known
, stdout: (no output on stdout)
[ ERROR ]: 23:17:32 | KiwiCommandError: buildah: stderr: Error: 1 error occurred:
* kiwi-image-bjab0z:tag-y9pac1: image not known
, stdout: (no output on stdout)
This is observed in the Koji tasks for building Hyperscale container images:
Something odd is going on with buildah inside of the mock chroot, and I'm not sure what...
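If I'm decoding the mount flags correctly, 0x44000 is MS_REC | MS_PRIVATE, so the call that fails during layer unpacking is roughly the equivalent of this (my reading of the log, not something confirmed from buildah's source):

# 0x4000 = MS_REC, 0x40000 = MS_PRIVATE, i.e. 0x44000 is roughly:
mount --make-rprivate /

My guess is that inside the classic mock chroot there is no private mount namespace in which / can be remounted like that, so the call comes back with EINVAL, but that's an assumption on my part.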
FYI @dcavalca @salimma
Also of note: this is a new kind of build we've never done before in CBS.
Metadata Update from @arrfab: - Issue tagged with: high-trouble, investigation, low-gain
As this seems to be something that was never tested in another Koji environment, maybe it's worth first reaching out to Koji through their issue tracker to see where the problem is? (Is that even documented?) /cc: @tkopecek
Hmm, I can't reproduce it locally, as kiwi fails on validation of the description. It fails for me with "Extra element locale in interleave". @ngompa do you know what is different between the kiwi installed in CBS and the one in F42 (kiwi-*-10.2.19-1)?
@tkopecek: well, there is the one deployed on the kojid hosts, which is https://cbs.centos.org/koji/buildinfo?buildID=52418 (infra tags), so kiwi-9.25.21-2.el9, but inside the buildroot it should be the version from epel{9,10} that gets picked up:
Example from the jobs linked above:
DEBUG util.py:463: Installing group/module packages:
DEBUG util.py:463:  btrfs-progs              x86_64  6.12-3.el9     build  1.2 M
DEBUG util.py:463:  distribution-gpg-keys    noarch  1.110-1.el9    build  656 k
DEBUG util.py:463:  kiwi-cli                 noarch  10.2.18-1.el9  build   33 k
DEBUG util.py:463:  kiwi-systemdeps          x86_64  10.2.18-1.el9  build   12 k
I can reproduce it now. @ngompa I've seen https://gitlab.com/CentOS/Hyperscale/releng/kiwi-descriptions/-/commit/811942799035073cbbd0e6eb6f6400163ad992cf#92a4bea549761ae9d68fe17b26601d3bbe3d922a - were you successful with it? I believe it is a good direction, as it seems that the chroot doesn't allow buildah to access the disk in the "default" way. I'm not sure how these env variables should be propagated to kiwi, but this .env file seems to be ignored.
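For reference, if those variables do end up in the environment of the buildah process, the intended effect would be something like this (just a sketch, reusing the container/image names from the failing log above):

export BUILDAH_ISOLATION=chroot   # buildah falls back to plain chroot isolation instead of namespaces
export STORAGE_DRIVER=vfs         # containers-storage uses the vfs driver instead of overlay mounts
buildah commit --rm --format oci kiwi-container-xly70m kiwi-image-bjab0z:tag-y9pac1

Whether kiwi actually exports them to its buildah subprocess is the open question.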
It did not work.
Definitely reproducible locally. I was able to work around it by running it with mock.new_chroot=1
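In case anyone else wants to try the same thing locally, new_chroot=1 maps to nspawn isolation in mock; a rough sketch (the chroot config name is only an example):

mock -r centos-stream-9-x86_64 --isolation=nspawn --shell
# or persistently in the mock config:
# config_opts['isolation'] = 'nspawn'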
Is it possible to set environment variables in a koji tag for specific tasks?
Yes, via rpm.env.<VARIABLE>=<VALUE>, see https://docs.pagure.org/koji/using_the_koji_build_system/#tuning-mock-s-behavior-per-tag
Could you try setting these variables and seeing if it fixes it?
BUILDAH_ISOLATION=chroot
STORAGE_DRIVER=vfs
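If the rpm.env mechanism above is the right lever, it would presumably look something like this on the build tag (the tag name is just a placeholder, and this assumes admin rights on cbs.centos.org):

koji edit-tag hyperscale-packages-main-el9s-build \
    -x rpm.env.BUILDAH_ISOLATION=chroot \
    -x rpm.env.STORAGE_DRIVER=vfs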
Nope, that doesn't work. Probably kiwi is not propagating the env variables to buildah? The only thing that works for me is the nspawn isolation.
Well, they might be propagated but it still doesn't work.
There is still mock.new_chroot=0 at https://cbs.centos.org/koji/taginfo?tagID=3076
Yeah, because all the other image builds fail with it. So I guess we need all our build tags doubled to split between disk images and everything else.
The problem with new_chroot is that nspawn does not allow loop devices to work: https://github.com/rpm-software-management/mock/issues/1554
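An easy way to see that limitation from a local mock (the chroot name is just an example, and the exact error text may differ):

mock -r centos-stream-9-x86_64 --isolation=nspawn --chroot 'losetup -f'
# typically fails under nspawn because /dev/loop-control is not available in the
# container, whereas --isolation=simple can reach the host's loop devices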
What about adding --new-chroot / --old-chroot to the kiwi-build command? Similar to the runroot options.
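Hypothetically that could mirror the runroot invocation, e.g. (the option does not exist yet, and the argument layout here is only a guess):

koji kiwi-build --new-chroot <build target> <SCM URL> <description path>   # --new-chroot is hypothetical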
Just revisiting open tickets: as this seems not directly tied to CentOS infra, can we just close the ticket? Happy to apply something once you've found the workaround upstream in koji/kiwi, but there's nothing to do (yet) for us (CentOS infra)?
Let me close it for now; you can reopen it once there is proper support in upstream koji/kiwi to make it happen (or once we know how to invoke it correctly on cbs.centos.org).
Metadata Update from @arrfab: - Issue close_status updated to: Insufficient Data - Issue status updated to: Closed (was: Open)