ci: Add test cases for container fetching and loading

Message ID 20240805071755.19853-1-ubely@ilbers.de
State Superseded, archived
Series ci: Add test cases for container fetching and loading

Commit Message

Uladzimir Bely Aug. 5, 2024, 7:16 a.m. UTC
From: Jan Kiszka <jan.kiszka@siemens.com>

This plugs the two example recipes for loading container images into
VM-based testing. The test consists of running 'true' in the installed
alpine images.

Rather than enabling the ci user to do password-less sudo, this uses su
with the piped-in password. Another trick needed is to poll for the
images because loading is performed asynchronously.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Uladzimir Bely <ubely@ilbers.de>
---
 .../recipes-core/images/isar-image-ci.bb      |  2 ++
 testsuite/citest.py                           | 24 +++++++++++++++++++
 2 files changed, 26 insertions(+)

This is a drop-in replacement of patch 4 from "[PATCH v4 0/5] Introduce
container fetcher and pre-loader" series:
- Fixed syntax errors (incorrectly escaped '\$')
- Fixed long lines in order to pass flake8

Comments

Jan Kiszka Aug. 5, 2024, 9:17 a.m. UTC | #1
On 05.08.24 09:16, Uladzimir Bely wrote:
> - Fixed syntax errors (incorrectly escaped '\$')

IIRC, we do need the escape inside the shell (sh -c '...'). So you
likely need to escape the escape character instead.

Jan

> - Fixed long lines in order to pass flake8
> 
> diff --git a/meta-test/recipes-core/images/isar-image-ci.bb b/meta-test/recipes-core/images/isar-image-ci.bb
> index e5d51e6e..9133da74 100644
> --- a/meta-test/recipes-core/images/isar-image-ci.bb
> +++ b/meta-test/recipes-core/images/isar-image-ci.bb
> @@ -16,6 +16,7 @@ IMAGE_INSTALL += "sshd-regen-keys"
>  
>  # qemuamd64-bookworm
>  WKS_FILE:qemuamd64:debian-bookworm ?= "multipart-efi.wks"
> +IMAGE_INSTALL:append:qemuamd64:debian-bookworm = " prebuilt-docker-img prebuilt-podman-img"
>  
>  # qemuamd64-bullseye
>  IMAGE_FSTYPES:append:qemuamd64:debian-bullseye ?= " cpio.gz tar.gz"
> @@ -51,3 +52,4 @@ IMAGER_INSTALL:append:qemuarm:debian-bookworm ?= " ${SYSTEMD_BOOTLOADER_INSTALL}
>  # qemuarm64-bookworm
>  IMAGE_FSTYPES:append:qemuarm64:debian-bookworm ?= " wic.xz"
>  IMAGER_INSTALL:append:qemuarm64:debian-bookworm ?= " ${GRUB_BOOTLOADER_INSTALL}"
> +IMAGE_INSTALL:append:qemuarm64:debian-bookworm = " prebuilt-docker-img prebuilt-podman-img"
> diff --git a/testsuite/citest.py b/testsuite/citest.py
> index 7064c1e4..4a248a49 100755
> --- a/testsuite/citest.py
> +++ b/testsuite/citest.py
> @@ -609,3 +609,27 @@ class VmBootTestFull(CIBaseTest):
>              image='isar-image-ci',
>              script='test_kernel_module.sh example_module',
>          )
> +
> +    def test_amd64_bookworm_prebuilt_containers(self):
> +        self.init()
> +        self.vm_start(
> +            'amd64', 'bookworm', image='isar-image-ci',
> +            cmd='echo root | su -c \'PATH=$PATH:/usr/sbin;'
> +                'for n in $(seq 30);'
> +                '  do docker images | grep -q alpine && break; sleep 10; done;'
> +                'docker run --rm quay.io/libpod/alpine:3.10.2 true && '
> +                'for n in $(seq 30);'
> +                '  do podman images | grep -q alpine && break; sleep 10; done;'
> +                'podman run --rm quay.io/libpod/alpine:latest true\'')
> +
> +    def test_arm64_bookworm_prebuilt_containers(self):
> +        self.init()
> +        self.vm_start(
> +            'arm64', 'bookworm', image='isar-image-ci',
> +            cmd='echo root | su -c \'PATH=$PATH:/usr/sbin;'
> +                'for n in $(seq 30);'
> +                '  do docker images | grep -q alpine && break; sleep 10; done;'
> +                'docker run --rm quay.io/libpod/alpine:3.10.2 true && '
> +                'for n in $(seq 30);'
> +                '  do podman images | grep -q alpine && break; sleep 10; done;'
> +                'podman run --rm quay.io/libpod/alpine:latest true\'')
Uladzimir Bely Aug. 5, 2024, 9:40 a.m. UTC | #2
On Mon, 2024-08-05 at 11:17 +0200, Jan Kiszka wrote:
> On 05.08.24 09:16, Uladzimir Bely wrote:
> > - Fixed syntax errors (incorrectly escaped '\$')
> 
> IIRC, we do need the escape inside the shell (sh -c '...'). So you
> likely need to escape the escape character instead.
> 
> Jan
> 
> > 

I just tried to make a simple check:

```
$ su -c 'for i in $(seq 3); do echo $i; done'
Password: 
1
2
3

$ su -c 'for i in \$(seq 3); do echo $i; done'
Password: 
bash: -c: line 1: syntax error near unexpected token `('
bash: -c: line 1: `for i in \$(seq 3); do echo $i; done'

$ su -c 'for i in \\$(seq 3); do echo $i; done'
Password: 
\1
2
3
```

We likely don't need escaping at all.
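
For reference, no escaping is needed here because the outer single quotes
keep the invoking shell from expanding $(...); su hands the string to the
login shell's -c, where it is expanded exactly once. A plain sh -c shows
the same single level of expansion without needing su:

```shell
# The single quotes stop the outer shell from touching $(...); the inner
# shell started by -c expands it once, exactly as su's login shell does.
sh -c 'for i in $(seq 3); do echo $i; done'   # prints 1 2 3
```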

Anyway, we could just convert the tests from "cmd=<long_command>"
to "script=test_prebuilt_container.sh" and have the test logic in a
human-readable form.
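
A sketch of what such a script could look like (the file name, the function
name, and the ~5-minute timeout carried over from the inline command are
assumptions, not the final script):

```shell
#!/bin/sh
# Hypothetical test_prebuilt_container.sh: wait for the asynchronously
# pre-loaded image to show up in the engine, then run 'true' in it.
run_container_test() {
    engine="$1"; image="$2"; pattern="$3"
    for n in $(seq 30); do                    # poll for up to ~5 minutes
        "$engine" images | grep -q "$pattern" && break
        sleep 10
    done
    "$engine" run --rm "$image" true
}

# Intended usage on the target:
#   run_container_test docker quay.io/libpod/alpine:3.10.2 alpine &&
#   run_container_test podman quay.io/libpod/alpine:latest alpine
```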

Jan Kiszka Aug. 5, 2024, 10:43 a.m. UTC | #3
On 05.08.24 11:40, Uladzimir Bely wrote:
> 
> I just tried to make a simple check:
> 
> ```
> $ su -c 'for i in $(seq 3); do echo $i; done'
> Password: 
> 1
> 2
> 3
> 
> $ su -c 'for i in \$(seq 3); do echo $i; done'
> Password: 
> bash: -c: line 1: syntax error near unexpected token `('
> bash: -c: line 1: `for i in \$(seq 3); do echo $i; done'
> 
> $ su -c 'for i in \\$(seq 3); do echo $i; done'
> Password: 
> \1
> 2
> 3
> ```
> 
> We likely don't need escaping at all.

Interesting - anyway, if this sequence is not properly resolved, the
test will fail. And I assume you had it running successfully, so we must
be fine.

> 
> Anyway, we could just convert the tests from "cmd=<long_command>"
> to "script=test_prebuilt_container.sh" and have the test logic in a
> human-readable form.
> 

Also fine with me.

Jan
Uladzimir Bely Aug. 5, 2024, 10:51 a.m. UTC | #4
On Mon, 2024-08-05 at 12:43 +0200, Jan Kiszka wrote:
> > 
> > Anyway, we could just convert the tests from "cmd=<long_command>"
> > to "script=test_prebuilt_container.sh" and have the test logic in a
> > human-readable form.
> > 
> 
> Also fine with me.
> 
> Jan
> 

OK, I've already prepared the script internally and will check in CI
with it.
Uladzimir Bely Aug. 6, 2024, 4:48 a.m. UTC | #5
On Mon, 2024-08-05 at 13:51 +0300, Uladzimir Bely wrote:
> 
> OK, I've already prepared the script internally and will check in CI
> with it.
> 

... and I'm still having problems with running commands inside the arm64
container.

I manually ran (with the same command line as CI uses) the qemuamd64 and
qemuarm64 images.

Running the prebuilt containers in the amd64 machine works well:

```
root@isar:~# docker images
REPOSITORY              TAG       IMAGE ID       CREATED       SIZE
quay.io/libpod/alpine   3.10.2    961769676411   4 years ago   5.58MB
root@isar:~# docker run --rm quay.io/libpod/alpine:3.10.2 true
[   61.233873] docker0: port 1(veth1c2b6f9) entered blocking state
[   61.234280] docker0: port 1(veth1c2b6f9) entered disabled state
[   61.240243] device veth1c2b6f9 entered promiscuous mode
[   62.650328] eth0: renamed from veth2aff680
[   62.664713] IPv6: ADDRCONF(NETDEV_CHANGE): veth1c2b6f9: link becomes ready
[   62.665407] docker0: port 1(veth1c2b6f9) entered blocking state
[   62.665656] docker0: port 1(veth1c2b6f9) entered forwarding state
[   62.666394] IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready
[   63.220542] docker0: port 1(veth1c2b6f9) entered disabled state
[   63.229530] veth2aff680: renamed from eth0
[   63.308290] docker0: port 1(veth1c2b6f9) entered disabled state
[   63.311282] device veth1c2b6f9 left promiscuous mode
[   63.311507] docker0: port 1(veth1c2b6f9) entered disabled state
root@isar:~# echo $?
0
root@isar:~# podman images
REPOSITORY             TAG         IMAGE ID      CREATED      SIZE
quay.io/libpod/alpine  latest      961769676411  4 years ago  5.85 MB
root@isar:~# podman run --rm quay.io/libpod/alpine:latest true
[   78.274955] cni-podman0: port 1(vethf6fde03e) entered blocking state
[   78.275225] cni-podman0: port 1(vethf6fde03e) entered disabled state
[   78.277667] device vethf6fde03e entered promiscuous mode
[   78.626628] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[   78.627038] IPv6: ADDRCONF(NETDEV_CHANGE): vethf6fde03e: link becomes ready
[   78.627313] cni-podman0: port 1(vethf6fde03e) entered blocking state
[   78.627513] cni-podman0: port 1(vethf6fde03e) entered forwarding state
[   79.690462] audit: type=1400 audit(1722919083.116:6): apparmor="STATUS" operation="profile_load" profile="unconfined" name="containers-default-0.50.1" pid=750 comm="apparmor_parser"
[   80.574314] cni-podman0: port 1(vethf6fde03e) entered disabled state
[   80.575874] device vethf6fde03e left promiscuous mode
[   80.576060] cni-podman0: port 1(vethf6fde03e) entered disabled state
root@isar:~# echo $?
0
```

The same under arm64 fails:

```
root@isar:~# docker images
REPOSITORY              TAG       IMAGE ID       CREATED       SIZE
quay.io/libpod/alpine   3.10.2    915beeae4675   4 years ago   5.33MB
root@isar:~# docker run --rm quay.io/libpod/alpine:3.10.2 true
[  407.689016] docker0: port 1(veth81a2857) entered blocking state
[  407.689231] docker0: port 1(veth81a2857) entered disabled state
[  407.698637] device veth81a2857 entered promiscuous mode
[  410.003030] eth0: renamed from vethbe8a124
[  410.026357] IPv6: ADDRCONF(NETDEV_CHANGE): veth81a2857: link becomes ready
[  410.026727] docker0: port 1(veth81a2857) entered blocking state
[  410.026872] docker0: port 1(veth81a2857) entered forwarding state
[  410.767475] docker0: port 1(veth81a2857) entered disabled state
[  410.788277] vethbe8a124: renamed from eth0
[  410.941958] docker0: port 1(veth81a2857) entered disabled state
[  410.944534] device veth81a2857 left promiscuous mode
[  410.944676] docker0: port 1(veth81a2857) entered disabled state
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "true": executable file not found in $PATH: unknown.
root@isar:~# echo $?
127
root@isar:~# podman images
REPOSITORY             TAG         IMAGE ID      CREATED      SIZE
quay.io/libpod/alpine  latest      915beeae4675  4 years ago  5.59 MB
root@isar:~# podman run --rm quay.io/libpod/alpine:latest true
[  423.567388] cni-podman0: port 1(veth29135974) entered blocking state
[  423.567593] cni-podman0: port 1(veth29135974) entered disabled state
[  423.569719] device veth29135974 entered promiscuous mode
[  423.754420] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[  423.754765] IPv6: ADDRCONF(NETDEV_CHANGE): veth29135974: link becomes ready
[  423.755036] cni-podman0: port 1(veth29135974) entered blocking state
[  423.755183] cni-podman0: port 1(veth29135974) entered forwarding state
[  426.090252] cni-podman0: port 1(veth29135974) entered disabled state
[  426.098292] device veth29135974 left promiscuous mode
[  426.098455] cni-podman0: port 1(veth29135974) entered disabled state
Error: runc: runc create failed: unable to start container process: exec: "true": executable file not found in $PATH: OCI runtime attempted to invoke a command that was not found
root@isar:~# echo $?
127
```

At first glance it looks like the arm64 images are not functional. I'll
continue debugging.
Uladzimir Bely Aug. 6, 2024, 9:48 a.m. UTC | #6
On Tue, 2024-08-06 at 07:48 +0300, Uladzimir Bely wrote:
> At first glance it looks like the arm64 images are not functional.
> I'll continue debugging.
> 

After some debugging I can see that something leaves the prebuilt docker
image inside qemu broken. But removing it from the docker engine and
loading it again helps:


```
root@isar:~# docker images
REPOSITORY              TAG       IMAGE ID       CREATED       SIZE
quay.io/libpod/alpine   3.10.2    915beeae4675   4 years ago   5.33MB

root@isar:~# docker run --rm quay.io/libpod/alpine:3.10.2 true
[  902.770874] docker0: port 1(veth8275b2c) entered blocking state
[  902.771066] docker0: port 1(veth8275b2c) entered disabled state
[  902.777051] device veth8275b2c entered promiscuous mode
[  904.813519] eth0: renamed from veth2f2256f
[  904.830269] IPv6: ADDRCONF(NETDEV_CHANGE): veth8275b2c: link becomes ready
[  904.830857] docker0: port 1(veth8275b2c) entered blocking state
[  904.830997] docker0: port 1(veth8275b2c) entered forwarding state
[  904.831407] IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready
[  905.372753] docker0: port 1(veth8275b2c) entered disabled state
[  905.385163] veth2f2256f: renamed from eth0
[  905.487707] docker0: port 1(veth8275b2c) entered disabled state
[  905.491396] device veth8275b2c left promiscuous mode
[  905.491533] docker0: port 1(veth8275b2c) entered disabled state
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "true": executable file not found in $PATH: unknown.
ERRO[0003] error waiting for container: context canceled 

root@isar:~# echo $?
127

root@isar:~# docker image rm 915beeae4675
Untagged: quay.io/libpod/alpine:3.10.2
Deleted: sha256:915beeae46751fc564998c79e73a1026542e945ca4f73dc841d09ccc6c2c0672
Deleted: sha256:5e0d8111135538b8a86ce5fc969849efce16c455fd016bb3dc53131bcedc4da5

root@isar:~# docker images
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE

root@isar:~# pzstd -c -d /usr/share/prebuilt-docker-img/images/quay.io.libpod.alpine\:3.10.2.zst | docker load
/usr/share/prebuilt-docker-img/images/quay.io.libpod.alpine:3.10.2.zst: 5598720 bytes 
5e0d81111355: Loading layer   5.59MB/5.59MB
Loaded image: quay.io/libpod/alpine:3.10.2

root@isar:~# docker run --rm quay.io/libpod/alpine:3.10.2 true
[ 1023.800568] docker0: port 1(veth3eb45d3) entered blocking state
[ 1023.800790] docker0: port 1(veth3eb45d3) entered disabled state
[ 1023.805585] device veth3eb45d3 entered promiscuous mode
[ 1025.295999] eth0: renamed from veth7e4183e
[ 1025.310388] IPv6: ADDRCONF(NETDEV_CHANGE): veth3eb45d3: link becomes ready
[ 1025.310681] docker0: port 1(veth3eb45d3) entered blocking state
[ 1025.310801] docker0: port 1(veth3eb45d3) entered forwarding state
[ 1025.979813] docker0: port 1(veth3eb45d3) entered disabled state
[ 1025.990858] veth7e4183e: renamed from eth0
[ 1026.087161] docker0: port 1(veth3eb45d3) entered disabled state
[ 1026.088367] device veth3eb45d3 left promiscuous mode
[ 1026.088471] docker0: port 1(veth3eb45d3) entered disabled state

root@isar:~# echo $?
0
```

This looks strange. Nothing changed (the image hash is the same), but the
second run works well. After rebooting the qemu machine, it still works.

The podman prebuilt image looks unaffected - it works from the beginning.
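Since the pre-loader runs asynchronously, any check like the one above has to wait until the image actually shows up in the engine. A minimal polling sketch (the function name and arguments are illustrative, not taken from the patch):

```shell
#!/bin/sh
# Poll until an image appears in the engine's image list, since
# pre-loading happens asynchronously at boot.
# $1: engine command (docker/podman), $2: image name pattern,
# $3: max attempts (default 30, one second apart).
poll_for_image() {
    engine="$1"; image="$2"; tries="${3:-30}"
    i=0
    while [ "$i" -lt "$tries" ]; do
        if "$engine" images | grep -q "$image"; then
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1
}
```

Something along these lines is what the testsuite effectively has to do before it can attempt `docker run`.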
Jan Kiszka Aug. 6, 2024, 10:46 a.m. UTC | #7
On 06.08.24 11:48, Uladzimir Bely wrote:
> On Tue, 2024-08-06 at 07:48 +0300, Uladzimir Bely wrote:
>> On Mon, 2024-08-05 at 13:51 +0300, Uladzimir Bely wrote:
>>> On Mon, 2024-08-05 at 12:43 +0200, Jan Kiszka wrote:
>>>> On 05.08.24 11:40, Uladzimir Bely wrote:
>>>>> On Mon, 2024-08-05 at 11:17 +0200, Jan Kiszka wrote:
>>>>>> On 05.08.24 09:16, Uladzimir Bely wrote:
>>>>>>> From: Jan Kiszka <jan.kiszka@siemens.com>
>>>>>>>
>>>>>>> This plugs the two example recipes for loading container
>>>>>>> images
>>>>>>> into
>>>>>>> VM-based testing. The test consists of running 'true' in
>>>>>>> the
>>>>>>> installed
>>>>>>> alpine images.
>>>>>>>
>>>>>>> Rather than enabling the ci user to do password-less sudo,
>>>>>>> this
>>>>>>> uses su
>>>>>>> with the piped-in password. Another trick needed is to poll
>>>>>>> for
>>>>>>> the
>>>>>>> images because loading is performed asynchronously.
>>>>>>>
>>>>>>> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
>>>>>>> Signed-off-by: Uladzimir Bely <ubely@ilbers.de>
>>>>>>> ---
>>>>>>>  .../recipes-core/images/isar-image-ci.bb      |  2 ++
>>>>>>>  testsuite/citest.py                           | 24
>>>>>>> +++++++++++++++++++
>>>>>>>  2 files changed, 26 insertions(+)
>>>>>>>
>>>>>>> This is a drop-in replacement of patch 4 from "[PATCH v4
>>>>>>> 0/5]
>>>>>>> Introduce
>>>>>>> container fetcher and pre-loader" series:
>>>>>>> - Fixed syntax errors (incorrectly escaped '\$')
>>>>>>
>>>>>> IIRC, we do need the escape inside the shell (sh -c '...').
>>>>>> So,
>>>>>> you
>>>>>> likely rather need to escape the escape character.
>>>>>>
>>>>>> Jan
>>>>>>
>>>>>>>
>>>>>
>>>>> I just tried to make a simple check:
>>>>>
>>>>> ```
>>>>> $ su -c 'for i in $(seq 3); do echo $i; done'
>>>>> Password: 
>>>>> 1
>>>>> 2
>>>>> 3
>>>>>
>>>>> $ su -c 'for i in \$(seq 3); do echo $i; done'
>>>>> Password: 
>>>>> bash: -c: line 1: syntax error near unexpected token `('
>>>>> bash: -c: line 1: `for i in \$(seq 3); do echo $i; done'
>>>>>
>>>>> $ su -c 'for i in \\$(seq 3); do echo $i; done'
>>>>> Password: 
>>>>> \1
>>>>> 2
>>>>> 3
>>>>> ```
>>>>>
>>>>> We likely don't need escaping at all.
>>>>
>>>> Interesting - anyway, if this sequence is not properly resolved,
>>>> the
>>>> test will fail. And I assume you had it running successfully, so
>>>> we
>>>> must
>>>> be fine.
>>>>
>>>>>
>>>>> Anyway, we could just convert the tests from
>>>>> "cmd=<long_command"
>>>>> to "script=test_prebuild_container.sh" and have test logic in a
>>>>> human-
>>>>> readable form.
>>>>>
>>>>
>>>> Also fine with me.
>>>>
>>>> Jan
>>>>
>>>
>>> OK, I've already prepared the script internally and will check in
>>> CI
>>> with it.
>>>
>>
>> ... and still having problems with running commands inside arm64
>> container.
>>
>> I manually run (with same command-line as CI does) qemuamd64 and
>> qemuarm64 images.
>>
>> Running prebuilt container in amd64 machine works well:
>>
>> ```
>> root@isar:~# docker images
>> REPOSITORY              TAG       IMAGE ID       CREATED       SIZE
>> quay.io/libpod/alpine   3.10.2    961769676411   4 years ago   5.58MB
>> root@isar:~# docker run --rm quay.io/libpod/alpine:3.10.2 true
>> [   61.233873] docker0: port 1(veth1c2b6f9) entered blocking state
>> [   61.234280] docker0: port 1(veth1c2b6f9) entered disabled state
>> [   61.240243] device veth1c2b6f9 entered promiscuous mode
>> [   62.650328] eth0: renamed from veth2aff680
>> [   62.664713] IPv6: ADDRCONF(NETDEV_CHANGE): veth1c2b6f9: link
>> becomes
>> ready
>> [   62.665407] docker0: port 1(veth1c2b6f9) entered blocking state
>> [   62.665656] docker0: port 1(veth1c2b6f9) entered forwarding state
>> [   62.666394] IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes
>> ready
>> [   63.220542] docker0: port 1(veth1c2b6f9) entered disabled state
>> [   63.229530] veth2aff680: renamed from eth0
>> [   63.308290] docker0: port 1(veth1c2b6f9) entered disabled state
>> [   63.311282] device veth1c2b6f9 left promiscuous mode
>> [   63.311507] docker0: port 1(veth1c2b6f9) entered disabled state
>> root@isar:~# echo $?
>> 0
>> root@isar:~# podman images
>> REPOSITORY             TAG         IMAGE ID      CREATED      SIZE
>> quay.io/libpod/alpine  latest      961769676411  4 years ago  5.85 MB
>> root@isar:~# podman run --rm quay.io/libpod/alpine:latest true
>> [   78.274955] cni-podman0: port 1(vethf6fde03e) entered blocking
>> state
>> [   78.275225] cni-podman0: port 1(vethf6fde03e) entered disabled
>> state
>> [   78.277667] device vethf6fde03e entered promiscuous mode
>> [   78.626628] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes
>> ready
>> [   78.627038] IPv6: ADDRCONF(NETDEV_CHANGE): vethf6fde03e: link
>> becomes ready
>> [   78.627313] cni-podman0: port 1(vethf6fde03e) entered blocking
>> state
>> [   78.627513] cni-podman0: port 1(vethf6fde03e) entered forwarding
>> state
>> [   79.690462] audit: type=1400 audit(1722919083.116:6):
>> apparmor="STATUS" operation="profile_load" profile="unconfined"
>> name="containers-default-0.50.1" pid=750 comm="apparmor_parser"
>> [   80.574314] cni-podman0: port 1(vethf6fde03e) entered disabled
>> state
>> [   80.575874] device vethf6fde03e left promiscuous mode
>> [   80.576060] cni-podman0: port 1(vethf6fde03e) entered disabled
>> state
>> root@isar:~# echo $?
>> 0
>> ```
>>
>> The same under arm64 fails:
>>
>> ```
>> root@isar:~# docker images
>> REPOSITORY              TAG       IMAGE ID       CREATED       SIZE
>> quay.io/libpod/alpine   3.10.2    915beeae4675   4 years ago   5.33MB
>> root@isar:~# docker run --rm quay.io/libpod/alpine:3.10.2 true
>> [  407.689016] docker0: port 1(veth81a2857) entered blocking state
>> [  407.689231] docker0: port 1(veth81a2857) entered disabled state
>> [  407.698637] device veth81a2857 entered promiscuous mode
>> [  410.003030] eth0: renamed from vethbe8a124
>> [  410.026357] IPv6: ADDRCONF(NETDEV_CHANGE): veth81a2857: link
>> becomes
>> ready
>> [  410.026727] docker0: port 1(veth81a2857) entered blocking state
>> [  410.026872] docker0: port 1(veth81a2857) entered forwarding state
>> [  410.767475] docker0: port 1(veth81a2857) entered disabled state
>> [  410.788277] vethbe8a124: renamed from eth0
>> [  410.941958] docker0: port 1(veth81a2857) entered disabled state
>> [  410.944534] device veth81a2857 left promiscuous mode
>> [  410.944676] docker0: port 1(veth81a2857) entered disabled state
>> docker: Error response from daemon: failed to create shim task: OCI
>> runtime create failed: runc create failed: unable to start container
>> process: exec: "true": executable file not found in $PATH: unknown.
>> root@isar:~# echo $?
>> 127
>> root@isar:~# podman images
>> REPOSITORY             TAG         IMAGE ID      CREATED      SIZE
>> quay.io/libpod/alpine  latest      915beeae4675  4 years ago  5.59 MB
>> root@isar:~# podman run --rm quay.io/libpod/alpine:latest true
>> [  423.567388] cni-podman0: port 1(veth29135974) entered blocking
>> state
>> [  423.567593] cni-podman0: port 1(veth29135974) entered disabled
>> state
>> [  423.569719] device veth29135974 entered promiscuous mode
>> [  423.754420] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes
>> ready
>> [  423.754765] IPv6: ADDRCONF(NETDEV_CHANGE): veth29135974: link
>> becomes ready
>> [  423.755036] cni-podman0: port 1(veth29135974) entered blocking
>> state
>> [  423.755183] cni-podman0: port 1(veth29135974) entered forwarding
>> state
>> [  426.090252] cni-podman0: port 1(veth29135974) entered disabled
>> state
>> [  426.098292] device veth29135974 left promiscuous mode
>> [  426.098455] cni-podman0: port 1(veth29135974) entered disabled
>> state
>> Error: runc: runc create failed: unable to start container process:
>> exec: "true": executable file not found in $PATH: OCI runtime
>> attempted
>> to invoke a command that was not found
>> root@isar:~# echo $?
>> 127
>> ```
>>
>> At first glance this looks like arm64 images are not functional.
>> Continue debugging.
>>
> 
> After some debugging I can see that something makes docker prebuilt
> image inside qemu broken. But removing it from and loading to docker
> engine again helps:
> 
> 
> ```
> root@isar:~# docker images
> REPOSITORY              TAG       IMAGE ID       CREATED       SIZE
> quay.io/libpod/alpine   3.10.2    915beeae4675   4 years ago   5.33MB
> 
> root@isar:~# docker run --rm quay.io/libpod/alpine:3.10.2 true
> [  902.770874] docker0: port 1(veth8275b2c) entered blocking state
> [  902.771066] docker0: port 1(veth8275b2c) entered disabled state
> [  902.777051] device veth8275b2c entered promiscuous mode
> [  904.813519] eth0: renamed from veth2f2256f
> [  904.830269] IPv6: ADDRCONF(NETDEV_CHANGE): veth8275b2c: link becomes
> ready
> [  904.830857] docker0: port 1(veth8275b2c) entered blocking state
> [  904.830997] docker0: port 1(veth8275b2c) entered forwarding state
> [  904.831407] IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes
> ready
> [  905.372753] docker0: port 1(veth8275b2c) entered disabled state
> [  905.385163] veth2f2256f: renamed from eth0
> [  905.487707] docker0: port 1(veth8275b2c) entered disabled state
> [  905.491396] device veth8275b2c left promiscuous mode
> [  905.491533] docker0: port 1(veth8275b2c) entered disabled state
> docker: Error response from daemon: failed to create shim task: OCI
> runtime create failed: runc create failed: unable to start container
> process: exec: "true": executable file not found in $PATH: unknown.
> ERRO[0003] error waiting for container: context canceled 
> 
> root@isar:~# echo $?
> 127
> 
> root@isar:~# docker image rm 915beeae4675
> Untagged: quay.io/libpod/alpine:3.10.2
> Deleted:
> sha256:915beeae46751fc564998c79e73a1026542e945ca4f73dc841d09ccc6c2c0672
> Deleted:
> sha256:5e0d8111135538b8a86ce5fc969849efce16c455fd016bb3dc53131bcedc4da5
> 
> root@isar:~# docker images
> REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
> 
> root@isar:~# pzstd -c -d /usr/share/prebuilt-docker-
> img/images/quay.io.libpod.alpine\:3.10.2.zst | docker load
> /usr/share/prebuilt-docker-img/images/quay.io.libpod.alpine:3.10.2.zst:
> 5598720 bytes 
> 5e0d81111355: Loading layer   5.59MB/5.59MB
> Loaded image: quay.io/libpod/alpine:3.10.2
> 
> root@isar:~# docker run --rm quay.io/libpod/alpine:3.10.2 true
> [ 1023.800568] docker0: port 1(veth3eb45d3) entered blocking state
> [ 1023.800790] docker0: port 1(veth3eb45d3) entered disabled state
> [ 1023.805585] device veth3eb45d3 entered promiscuous mode
> [ 1025.295999] eth0: renamed from veth7e4183e
> [ 1025.310388] IPv6: ADDRCONF(NETDEV_CHANGE): veth3eb45d3: link becomes
> ready
> [ 1025.310681] docker0: port 1(veth3eb45d3) entered blocking state
> [ 1025.310801] docker0: port 1(veth3eb45d3) entered forwarding state
> [ 1025.979813] docker0: port 1(veth3eb45d3) entered disabled state
> [ 1025.990858] veth7e4183e: renamed from eth0
> [ 1026.087161] docker0: port 1(veth3eb45d3) entered disabled state
> [ 1026.088367] device veth3eb45d3 left promiscuous mode
> [ 1026.088471] docker0: port 1(veth3eb45d3) entered disabled state
> 
> root@isar:~# echo $?
> 0
> ```
> 
> This looks strange. Nothing changed (image hash is the same), but the
> second run works well. After rebooting qemu machine it still works.
> 
> Podman prebuilt image looks unaffected - it works from the beginning.
> 

Strange, all that used to work. You manually reproduced this as well,
not only via the testsuite, right? Let me test again locally...

Jan
Uladzimir Bely Aug. 6, 2024, 10:54 a.m. UTC | #8
On Tue, 2024-08-06 at 12:46 +0200, Jan Kiszka wrote:
> On 06.08.24 11:48, Uladzimir Bely wrote:
> > On Tue, 2024-08-06 at 07:48 +0300, Uladzimir Bely wrote:
> > > On Mon, 2024-08-05 at 13:51 +0300, Uladzimir Bely wrote:
> > > > On Mon, 2024-08-05 at 12:43 +0200, Jan Kiszka wrote:
> > > > > On 05.08.24 11:40, Uladzimir Bely wrote:
> > > > > > On Mon, 2024-08-05 at 11:17 +0200, Jan Kiszka wrote:
> > > > > > > On 05.08.24 09:16, Uladzimir Bely wrote:
> > > > > > > > From: Jan Kiszka <jan.kiszka@siemens.com>
> > > > > > > > 
> > > > > > > > This plugs the two example recipes for loading
> > > > > > > > container
> > > > > > > > images
> > > > > > > > into
> > > > > > > > VM-based testing. The test consists of running 'true'
> > > > > > > > in
> > > > > > > > the
> > > > > > > > installed
> > > > > > > > alpine images.
> > > > > > > > 
> > > > > > > > Rather than enabling the ci user to do password-less
> > > > > > > > sudo,
> > > > > > > > this
> > > > > > > > uses su
> > > > > > > > with the piped-in password. Another trick needed is to
> > > > > > > > poll
> > > > > > > > for
> > > > > > > > the
> > > > > > > > images because loading is performed asynchronously.
> > > > > > > > 
> > > > > > > > Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
> > > > > > > > Signed-off-by: Uladzimir Bely <ubely@ilbers.de>
> > > > > > > > ---
> > > > > > > >  .../recipes-core/images/isar-image-ci.bb      |  2 ++
> > > > > > > >  testsuite/citest.py                           | 24
> > > > > > > > +++++++++++++++++++
> > > > > > > >  2 files changed, 26 insertions(+)
> > > > > > > > 
> > > > > > > > This is a drop-in replacement of patch 4 from "[PATCH
> > > > > > > > v4
> > > > > > > > 0/5]
> > > > > > > > Introduce
> > > > > > > > container fetcher and pre-loader" series:
> > > > > > > > - Fixed syntax errors (incorrectly escaped '\$')
> > > > > > > 
> > > > > > > IIRC, we do need the escape inside the shell (sh -c
> > > > > > > '...').
> > > > > > > So,
> > > > > > > you
> > > > > > > likely rather need to escape the escape character.
> > > > > > > 
> > > > > > > Jan
> > > > > > > 
> > > > > > > > 
> > > > > > 
> > > > > > I just tried to make a simple check:
> > > > > > 
> > > > > > ```
> > > > > > $ su -c 'for i in $(seq 3); do echo $i; done'
> > > > > > Password: 
> > > > > > 1
> > > > > > 2
> > > > > > 3
> > > > > > 
> > > > > > $ su -c 'for i in \$(seq 3); do echo $i; done'
> > > > > > Password: 
> > > > > > bash: -c: line 1: syntax error near unexpected token `('
> > > > > > bash: -c: line 1: `for i in \$(seq 3); do echo $i; done'
> > > > > > 
> > > > > > $ su -c 'for i in \\$(seq 3); do echo $i; done'
> > > > > > Password: 
> > > > > > \1
> > > > > > 2
> > > > > > 3
> > > > > > ```
> > > > > > 
> > > > > > We likely don't need escaping at all.
> > > > > 
> > > > > Interesting - anyway, if this sequence is not properly
> > > > > resolved,
> > > > > the
> > > > > test will fail. And I assume you had it running successfully,
> > > > > so
> > > > > we
> > > > > must
> > > > > be fine.
> > > > > 
> > > > > > 
> > > > > > Anyway, we could just convert the tests from
> > > > > > "cmd=<long_command"
> > > > > > to "script=test_prebuild_container.sh" and have test logic
> > > > > > in a
> > > > > > human-
> > > > > > readable form.
> > > > > > 
> > > > > 
> > > > > Also fine with me.
> > > > > 
> > > > > Jan
> > > > > 
> > > > 
> > > > OK, I've already prepared the script internally and will check
> > > > in
> > > > CI
> > > > with it.
> > > > 
> > > 
> > > ... and still having problems with running commands inside arm64
> > > container.
> > > 
> > > I manually run (with same command-line as CI does) qemuamd64 and
> > > qemuarm64 images.
> > > 
> > > Running prebuilt container in amd64 machine works well:
> > > 
> > > ```
> > > root@isar:~# docker images
> > > REPOSITORY              TAG       IMAGE ID       CREATED      
> > > SIZE
> > > quay.io/libpod/alpine   3.10.2    961769676411   4 years ago  
> > > 5.58MB
> > > root@isar:~# docker run --rm quay.io/libpod/alpine:3.10.2 true
> > > [   61.233873] docker0: port 1(veth1c2b6f9) entered blocking
> > > state
> > > [   61.234280] docker0: port 1(veth1c2b6f9) entered disabled
> > > state
> > > [   61.240243] device veth1c2b6f9 entered promiscuous mode
> > > [   62.650328] eth0: renamed from veth2aff680
> > > [   62.664713] IPv6: ADDRCONF(NETDEV_CHANGE): veth1c2b6f9: link
> > > becomes
> > > ready
> > > [   62.665407] docker0: port 1(veth1c2b6f9) entered blocking
> > > state
> > > [   62.665656] docker0: port 1(veth1c2b6f9) entered forwarding
> > > state
> > > [   62.666394] IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link
> > > becomes
> > > ready
> > > [   63.220542] docker0: port 1(veth1c2b6f9) entered disabled
> > > state
> > > [   63.229530] veth2aff680: renamed from eth0
> > > [   63.308290] docker0: port 1(veth1c2b6f9) entered disabled
> > > state
> > > [   63.311282] device veth1c2b6f9 left promiscuous mode
> > > [   63.311507] docker0: port 1(veth1c2b6f9) entered disabled
> > > state
> > > root@isar:~# echo $?
> > > 0
> > > root@isar:~# podman images
> > > REPOSITORY             TAG         IMAGE ID      CREATED     
> > > SIZE
> > > quay.io/libpod/alpine  latest      961769676411  4 years ago 
> > > 5.85 MB
> > > root@isar:~# podman run --rm quay.io/libpod/alpine:latest true
> > > [   78.274955] cni-podman0: port 1(vethf6fde03e) entered blocking
> > > state
> > > [   78.275225] cni-podman0: port 1(vethf6fde03e) entered disabled
> > > state
> > > [   78.277667] device vethf6fde03e entered promiscuous mode
> > > [   78.626628] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes
> > > ready
> > > [   78.627038] IPv6: ADDRCONF(NETDEV_CHANGE): vethf6fde03e: link
> > > becomes ready
> > > [   78.627313] cni-podman0: port 1(vethf6fde03e) entered blocking
> > > state
> > > [   78.627513] cni-podman0: port 1(vethf6fde03e) entered
> > > forwarding
> > > state
> > > [   79.690462] audit: type=1400 audit(1722919083.116:6):
> > > apparmor="STATUS" operation="profile_load" profile="unconfined"
> > > name="containers-default-0.50.1" pid=750 comm="apparmor_parser"
> > > [   80.574314] cni-podman0: port 1(vethf6fde03e) entered disabled
> > > state
> > > [   80.575874] device vethf6fde03e left promiscuous mode
> > > [   80.576060] cni-podman0: port 1(vethf6fde03e) entered disabled
> > > state
> > > root@isar:~# echo $?
> > > 0
> > > ```
> > > 
> > > The same under arm64 fails:
> > > 
> > > ```
> > > root@isar:~# docker images
> > > REPOSITORY              TAG       IMAGE ID       CREATED      
> > > SIZE
> > > quay.io/libpod/alpine   3.10.2    915beeae4675   4 years ago  
> > > 5.33MB
> > > root@isar:~# docker run --rm quay.io/libpod/alpine:3.10.2 true
> > > [  407.689016] docker0: port 1(veth81a2857) entered blocking
> > > state
> > > [  407.689231] docker0: port 1(veth81a2857) entered disabled
> > > state
> > > [  407.698637] device veth81a2857 entered promiscuous mode
> > > [  410.003030] eth0: renamed from vethbe8a124
> > > [  410.026357] IPv6: ADDRCONF(NETDEV_CHANGE): veth81a2857: link
> > > becomes
> > > ready
> > > [  410.026727] docker0: port 1(veth81a2857) entered blocking
> > > state
> > > [  410.026872] docker0: port 1(veth81a2857) entered forwarding
> > > state
> > > [  410.767475] docker0: port 1(veth81a2857) entered disabled
> > > state
> > > [  410.788277] vethbe8a124: renamed from eth0
> > > [  410.941958] docker0: port 1(veth81a2857) entered disabled
> > > state
> > > [  410.944534] device veth81a2857 left promiscuous mode
> > > [  410.944676] docker0: port 1(veth81a2857) entered disabled
> > > state
> > > docker: Error response from daemon: failed to create shim task:
> > > OCI
> > > runtime create failed: runc create failed: unable to start
> > > container
> > > process: exec: "true": executable file not found in $PATH:
> > > unknown.
> > > root@isar:~# echo $?
> > > 127
> > > root@isar:~# podman images
> > > REPOSITORY             TAG         IMAGE ID      CREATED     
> > > SIZE
> > > quay.io/libpod/alpine  latest      915beeae4675  4 years ago 
> > > 5.59 MB
> > > root@isar:~# podman run --rm quay.io/libpod/alpine:latest true
> > > [  423.567388] cni-podman0: port 1(veth29135974) entered blocking
> > > state
> > > [  423.567593] cni-podman0: port 1(veth29135974) entered disabled
> > > state
> > > [  423.569719] device veth29135974 entered promiscuous mode
> > > [  423.754420] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes
> > > ready
> > > [  423.754765] IPv6: ADDRCONF(NETDEV_CHANGE): veth29135974: link
> > > becomes ready
> > > [  423.755036] cni-podman0: port 1(veth29135974) entered blocking
> > > state
> > > [  423.755183] cni-podman0: port 1(veth29135974) entered
> > > forwarding
> > > state
> > > [  426.090252] cni-podman0: port 1(veth29135974) entered disabled
> > > state
> > > [  426.098292] device veth29135974 left promiscuous mode
> > > [  426.098455] cni-podman0: port 1(veth29135974) entered disabled
> > > state
> > > Error: runc: runc create failed: unable to start container
> > > process:
> > > exec: "true": executable file not found in $PATH: OCI runtime
> > > attempted
> > > to invoke a command that was not found
> > > root@isar:~# echo $?
> > > 127
> > > ```
> > > 
> > > At first glance this looks like arm64 images are not functional.
> > > Continue debugging.
> > > 
> > 
> > After some debugging I can see that something makes docker prebuilt
> > image inside qemu broken. But removing it from and loading to
> > docker
> > engine again helps:
> > 
> > 
> > ```
> > root@isar:~# docker images
> > REPOSITORY              TAG       IMAGE ID       CREATED       SIZE
> > quay.io/libpod/alpine   3.10.2    915beeae4675   4 years ago  
> > 5.33MB
> > 
> > root@isar:~# docker run --rm quay.io/libpod/alpine:3.10.2 true
> > [  902.770874] docker0: port 1(veth8275b2c) entered blocking state
> > [  902.771066] docker0: port 1(veth8275b2c) entered disabled state
> > [  902.777051] device veth8275b2c entered promiscuous mode
> > [  904.813519] eth0: renamed from veth2f2256f
> > [  904.830269] IPv6: ADDRCONF(NETDEV_CHANGE): veth8275b2c: link
> > becomes
> > ready
> > [  904.830857] docker0: port 1(veth8275b2c) entered blocking state
> > [  904.830997] docker0: port 1(veth8275b2c) entered forwarding
> > state
> > [  904.831407] IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes
> > ready
> > [  905.372753] docker0: port 1(veth8275b2c) entered disabled state
> > [  905.385163] veth2f2256f: renamed from eth0
> > [  905.487707] docker0: port 1(veth8275b2c) entered disabled state
> > [  905.491396] device veth8275b2c left promiscuous mode
> > [  905.491533] docker0: port 1(veth8275b2c) entered disabled state
> > docker: Error response from daemon: failed to create shim task: OCI
> > runtime create failed: runc create failed: unable to start
> > container
> > process: exec: "true": executable file not found in $PATH: unknown.
> > ERRO[0003] error waiting for container: context canceled 
> > 
> > root@isar:~# echo $?
> > 127
> > 
> > root@isar:~# docker image rm 915beeae4675
> > Untagged: quay.io/libpod/alpine:3.10.2
> > Deleted:
> > sha256:915beeae46751fc564998c79e73a1026542e945ca4f73dc841d09ccc6c2c
> > 0672
> > Deleted:
> > sha256:5e0d8111135538b8a86ce5fc969849efce16c455fd016bb3dc53131bcedc
> > 4da5
> > 
> > root@isar:~# docker images
> > REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
> > 
> > root@isar:~# pzstd -c -d /usr/share/prebuilt-docker-
> > img/images/quay.io.libpod.alpine\:3.10.2.zst | docker load
> > /usr/share/prebuilt-docker-
> > img/images/quay.io.libpod.alpine:3.10.2.zst:
> > 5598720 bytes 
> > 5e0d81111355: Loading layer   5.59MB/5.59MB
> > Loaded image: quay.io/libpod/alpine:3.10.2
> > 
> > root@isar:~# docker run --rm quay.io/libpod/alpine:3.10.2 true
> > [ 1023.800568] docker0: port 1(veth3eb45d3) entered blocking state
> > [ 1023.800790] docker0: port 1(veth3eb45d3) entered disabled state
> > [ 1023.805585] device veth3eb45d3 entered promiscuous mode
> > [ 1025.295999] eth0: renamed from veth7e4183e
> > [ 1025.310388] IPv6: ADDRCONF(NETDEV_CHANGE): veth3eb45d3: link
> > becomes
> > ready
> > [ 1025.310681] docker0: port 1(veth3eb45d3) entered blocking state
> > [ 1025.310801] docker0: port 1(veth3eb45d3) entered forwarding
> > state
> > [ 1025.979813] docker0: port 1(veth3eb45d3) entered disabled state
> > [ 1025.990858] veth7e4183e: renamed from eth0
> > [ 1026.087161] docker0: port 1(veth3eb45d3) entered disabled state
> > [ 1026.088367] device veth3eb45d3 left promiscuous mode
> > [ 1026.088471] docker0: port 1(veth3eb45d3) entered disabled state
> > 
> > root@isar:~# echo $?
> > 0
> > ```
> > 
> > This looks strange. Nothing changed (image hash is the same), but
> > the
> > second run works well. After rebooting qemu machine it still works.
> > 
> > Podman prebuilt image looks unaffected - it works from the
> > beginning.
> > 
> 
> Strange, all that used to work. You manually reproduced this as well,
> not only via the testsuite, right? Let me test again locally...
> 
> Jan
> 

For manual tests I used images taken from CI (the ones that failed). As far
as I could see, the issue in my case was caused by a zero-size
"/bin/busybox" somewhere in /var/lib/docker/overlay2/. The file was
broken, and reinstalling the container fixed this.

But I guess this was caused by an already "spoiled" image that had been
tested in CI. When I just built a new image (on a local machine) and didn't
try to run qemu with it (i.e., didn't modify it), manually running the
docker image in it worked well. The busybox binary from the alpine
container was OK in that case.

Continue debugging ...
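The zero-size busybox symptom can be spotted without starting a container. A debugging sketch (assuming the default docker data root; the function name is illustrative) that scans the overlay2 layers for empty regular files:

```shell
#!/bin/sh
# List zero-size regular files inside the container filesystem layers.
# A truncated binary such as /bin/busybox in an overlay2 layer shows up
# here before any "executable file not found" error at run time.
# $1: layer directory (default: docker's overlay2 store).
check_zero_size() {
    root="${1:-/var/lib/docker/overlay2}"
    find "$root" -type f -size 0 -print
}
```

Run as root against /var/lib/docker/overlay2, this would have flagged the broken busybox directly.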
Uladzimir Bely Aug. 6, 2024, 3:16 p.m. UTC | #9
On Tue, 2024-08-06 at 13:54 +0300, Uladzimir Bely wrote:
> On Tue, 2024-08-06 at 12:46 +0200, Jan Kiszka wrote:
> > On 06.08.24 11:48, Uladzimir Bely wrote:
> > > On Tue, 2024-08-06 at 07:48 +0300, Uladzimir Bely wrote:
> > > > On Mon, 2024-08-05 at 13:51 +0300, Uladzimir Bely wrote:
> > > > > On Mon, 2024-08-05 at 12:43 +0200, Jan Kiszka wrote:
> > > > > > On 05.08.24 11:40, Uladzimir Bely wrote:
> > > > > > > On Mon, 2024-08-05 at 11:17 +0200, Jan Kiszka wrote:
> > > > > > > > On 05.08.24 09:16, Uladzimir Bely wrote:
> > > > > > > > > From: Jan Kiszka <jan.kiszka@siemens.com>
> > > > > > > > > 
> > > > > > > > > This plugs the two example recipes for loading
> > > > > > > > > container
> > > > > > > > > images
> > > > > > > > > into
> > > > > > > > > VM-based testing. The test consists of running 'true'
> > > > > > > > > in
> > > > > > > > > the
> > > > > > > > > installed
> > > > > > > > > alpine images.
> > > > > > > > > 
> > > > > > > > > Rather than enabling the ci user to do password-less
> > > > > > > > > sudo,
> > > > > > > > > this
> > > > > > > > > uses su
> > > > > > > > > with the piped-in password. Another trick needed is
> > > > > > > > > to
> > > > > > > > > poll
> > > > > > > > > for
> > > > > > > > > the
> > > > > > > > > images because loading is performed asynchronously.
> > > > > > > > > 
> > > > > > > > > Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
> > > > > > > > > Signed-off-by: Uladzimir Bely <ubely@ilbers.de>
> > > > > > > > > ---
> > > > > > > > >  .../recipes-core/images/isar-image-ci.bb      |  2
> > > > > > > > > ++
> > > > > > > > >  testsuite/citest.py                           | 24
> > > > > > > > > +++++++++++++++++++
> > > > > > > > >  2 files changed, 26 insertions(+)
> > > > > > > > > 
> > > > > > > > > This is a drop-in replacement of patch 4 from "[PATCH
> > > > > > > > > v4
> > > > > > > > > 0/5]
> > > > > > > > > Introduce
> > > > > > > > > container fetcher and pre-loader" series:
> > > > > > > > > - Fixed syntax errors (incorrectly escaped '\$')
> > > > > > > > 
> > > > > > > > IIRC, we do need the escape inside the shell (sh -c
> > > > > > > > '...').
> > > > > > > > So,
> > > > > > > > you
> > > > > > > > likely rather need to escape the escape character.
> > > > > > > > 
> > > > > > > > Jan
> > > > > > > > 
> > > > > > > > > 
> > > > > > > 
> > > > > > > I just tried to make a simple check:
> > > > > > > 
> > > > > > > ```
> > > > > > > $ su -c 'for i in $(seq 3); do echo $i; done'
> > > > > > > Password: 
> > > > > > > 1
> > > > > > > 2
> > > > > > > 3
> > > > > > > 
> > > > > > > $ su -c 'for i in \$(seq 3); do echo $i; done'
> > > > > > > Password: 
> > > > > > > bash: -c: line 1: syntax error near unexpected token `('
> > > > > > > bash: -c: line 1: `for i in \$(seq 3); do echo $i; done'
> > > > > > > 
> > > > > > > $ su -c 'for i in \\$(seq 3); do echo $i; done'
> > > > > > > Password: 
> > > > > > > \1
> > > > > > > 2
> > > > > > > 3
> > > > > > > ```
> > > > > > > 
> > > > > > > We likely don't need escaping at all.
> > > > > > 
> > > > > > Interesting - anyway, if this sequence is not properly
> > > > > > resolved,
> > > > > > the
> > > > > > test will fail. And I assume you had it running
> > > > > > successfully,
> > > > > > so
> > > > > > we
> > > > > > must
> > > > > > be fine.
> > > > > > 
> > > > > > > 
> > > > > > > Anyway, we could just convert the tests from
> > > > > > > "cmd=<long_command"
> > > > > > > to "script=test_prebuild_container.sh" and have test
> > > > > > > logic
> > > > > > > in a
> > > > > > > human-
> > > > > > > readable form.
> > > > > > > 
> > > > > > 
> > > > > > Also fine with me.
> > > > > > 
> > > > > > Jan
> > > > > > 
> > > > > 
> > > > > OK, I've already prepared the script internally and will
> > > > > check
> > > > > in
> > > > > CI
> > > > > with it.
> > > > > 
> > > > 
> > > > ... and still having problems with running commands inside
> > > > arm64
> > > > container.
> > > > 
> > > > I manually run (with same command-line as CI does) qemuamd64
> > > > and
> > > > qemuarm64 images.
> > > > 
> > > > Running prebuilt container in amd64 machine works well:
> > > > 
> > > > ```
> > > > root@isar:~# docker images
> > > > REPOSITORY              TAG       IMAGE ID       CREATED      
> > > > SIZE
> > > > quay.io/libpod/alpine   3.10.2    961769676411   4 years ago  
> > > > 5.58MB
> > > > root@isar:~# docker run --rm quay.io/libpod/alpine:3.10.2 true
> > > > [   61.233873] docker0: port 1(veth1c2b6f9) entered blocking state
> > > > [   61.234280] docker0: port 1(veth1c2b6f9) entered disabled state
> > > > [   61.240243] device veth1c2b6f9 entered promiscuous mode
> > > > [   62.650328] eth0: renamed from veth2aff680
> > > > [   62.664713] IPv6: ADDRCONF(NETDEV_CHANGE): veth1c2b6f9: link becomes ready
> > > > [   62.665407] docker0: port 1(veth1c2b6f9) entered blocking state
> > > > [   62.665656] docker0: port 1(veth1c2b6f9) entered forwarding state
> > > > [   62.666394] IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready
> > > > [   63.220542] docker0: port 1(veth1c2b6f9) entered disabled state
> > > > [   63.229530] veth2aff680: renamed from eth0
> > > > [   63.308290] docker0: port 1(veth1c2b6f9) entered disabled state
> > > > [   63.311282] device veth1c2b6f9 left promiscuous mode
> > > > [   63.311507] docker0: port 1(veth1c2b6f9) entered disabled state
> > > > root@isar:~# echo $?
> > > > 0
> > > > root@isar:~# podman images
> > > > REPOSITORY             TAG         IMAGE ID      CREATED      SIZE
> > > > quay.io/libpod/alpine  latest      961769676411  4 years ago  5.85 MB
> > > > root@isar:~# podman run --rm quay.io/libpod/alpine:latest true
> > > > [   78.274955] cni-podman0: port 1(vethf6fde03e) entered blocking state
> > > > [   78.275225] cni-podman0: port 1(vethf6fde03e) entered disabled state
> > > > [   78.277667] device vethf6fde03e entered promiscuous mode
> > > > [   78.626628] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
> > > > [   78.627038] IPv6: ADDRCONF(NETDEV_CHANGE): vethf6fde03e: link becomes ready
> > > > [   78.627313] cni-podman0: port 1(vethf6fde03e) entered blocking state
> > > > [   78.627513] cni-podman0: port 1(vethf6fde03e) entered forwarding state
> > > > [   79.690462] audit: type=1400 audit(1722919083.116:6): apparmor="STATUS" operation="profile_load" profile="unconfined" name="containers-default-0.50.1" pid=750 comm="apparmor_parser"
> > > > [   80.574314] cni-podman0: port 1(vethf6fde03e) entered disabled state
> > > > [   80.575874] device vethf6fde03e left promiscuous mode
> > > > [   80.576060] cni-podman0: port 1(vethf6fde03e) entered disabled state
> > > > root@isar:~# echo $?
> > > > 0
> > > > ```
> > > > 
> > > > The same under arm64 fails:
> > > > 
> > > > ```
> > > > root@isar:~# docker images
> > > > REPOSITORY              TAG       IMAGE ID       CREATED       SIZE
> > > > quay.io/libpod/alpine   3.10.2    915beeae4675   4 years ago   5.33MB
> > > > root@isar:~# docker run --rm quay.io/libpod/alpine:3.10.2 true
> > > > [  407.689016] docker0: port 1(veth81a2857) entered blocking state
> > > > [  407.689231] docker0: port 1(veth81a2857) entered disabled state
> > > > [  407.698637] device veth81a2857 entered promiscuous mode
> > > > [  410.003030] eth0: renamed from vethbe8a124
> > > > [  410.026357] IPv6: ADDRCONF(NETDEV_CHANGE): veth81a2857: link becomes ready
> > > > [  410.026727] docker0: port 1(veth81a2857) entered blocking state
> > > > [  410.026872] docker0: port 1(veth81a2857) entered forwarding state
> > > > [  410.767475] docker0: port 1(veth81a2857) entered disabled state
> > > > [  410.788277] vethbe8a124: renamed from eth0
> > > > [  410.941958] docker0: port 1(veth81a2857) entered disabled state
> > > > [  410.944534] device veth81a2857 left promiscuous mode
> > > > [  410.944676] docker0: port 1(veth81a2857) entered disabled state
> > > > docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "true": executable file not found in $PATH: unknown.
> > > > root@isar:~# echo $?
> > > > 127
> > > > root@isar:~# podman images
> > > > REPOSITORY             TAG         IMAGE ID      CREATED      SIZE
> > > > quay.io/libpod/alpine  latest      915beeae4675  4 years ago  5.59 MB
> > > > root@isar:~# podman run --rm quay.io/libpod/alpine:latest true
> > > > [  423.567388] cni-podman0: port 1(veth29135974) entered blocking state
> > > > [  423.567593] cni-podman0: port 1(veth29135974) entered disabled state
> > > > [  423.569719] device veth29135974 entered promiscuous mode
> > > > [  423.754420] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
> > > > [  423.754765] IPv6: ADDRCONF(NETDEV_CHANGE): veth29135974: link becomes ready
> > > > [  423.755036] cni-podman0: port 1(veth29135974) entered blocking state
> > > > [  423.755183] cni-podman0: port 1(veth29135974) entered forwarding state
> > > > [  426.090252] cni-podman0: port 1(veth29135974) entered disabled state
> > > > [  426.098292] device veth29135974 left promiscuous mode
> > > > [  426.098455] cni-podman0: port 1(veth29135974) entered disabled state
> > > > Error: runc: runc create failed: unable to start container process: exec: "true": executable file not found in $PATH: OCI runtime attempted to invoke a command that was not found
> > > > root@isar:~# echo $?
> > > > 127
> > > > ```
> > > > 
> > > > At first glance this looks like arm64 images are not functional.
> > > > Continue debugging.
> > > > 
> > > 
> > > After some debugging I can see that something breaks the prebuilt
> > > docker image inside qemu. But removing it from the docker engine and
> > > loading it again helps:
> > > 
> > > 
> > > ```
> > > root@isar:~# docker images
> > > REPOSITORY              TAG       IMAGE ID       CREATED       SIZE
> > > quay.io/libpod/alpine   3.10.2    915beeae4675   4 years ago   5.33MB
> > > 
> > > root@isar:~# docker run --rm quay.io/libpod/alpine:3.10.2 true
> > > [  902.770874] docker0: port 1(veth8275b2c) entered blocking state
> > > [  902.771066] docker0: port 1(veth8275b2c) entered disabled state
> > > [  902.777051] device veth8275b2c entered promiscuous mode
> > > [  904.813519] eth0: renamed from veth2f2256f
> > > [  904.830269] IPv6: ADDRCONF(NETDEV_CHANGE): veth8275b2c: link becomes ready
> > > [  904.830857] docker0: port 1(veth8275b2c) entered blocking state
> > > [  904.830997] docker0: port 1(veth8275b2c) entered forwarding state
> > > [  904.831407] IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready
> > > [  905.372753] docker0: port 1(veth8275b2c) entered disabled state
> > > [  905.385163] veth2f2256f: renamed from eth0
> > > [  905.487707] docker0: port 1(veth8275b2c) entered disabled state
> > > [  905.491396] device veth8275b2c left promiscuous mode
> > > [  905.491533] docker0: port 1(veth8275b2c) entered disabled state
> > > docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "true": executable file not found in $PATH: unknown.
> > > ERRO[0003] error waiting for container: context canceled 
> > > 
> > > root@isar:~# echo $?
> > > 127
> > > 
> > > root@isar:~# docker image rm 915beeae4675
> > > Untagged: quay.io/libpod/alpine:3.10.2
> > > Deleted: sha256:915beeae46751fc564998c79e73a1026542e945ca4f73dc841d09ccc6c2c0672
> > > Deleted: sha256:5e0d8111135538b8a86ce5fc969849efce16c455fd016bb3dc53131bcedc4da5
> > > 
> > > root@isar:~# docker images
> > > REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
> > > 
> > > root@isar:~# pzstd -c -d /usr/share/prebuilt-docker-img/images/quay.io.libpod.alpine\:3.10.2.zst | docker load
> > > /usr/share/prebuilt-docker-img/images/quay.io.libpod.alpine:3.10.2.zst: 5598720 bytes
> > > 5e0d81111355: Loading layer   5.59MB/5.59MB
> > > Loaded image: quay.io/libpod/alpine:3.10.2
> > > 
> > > root@isar:~# docker run --rm quay.io/libpod/alpine:3.10.2 true
> > > [ 1023.800568] docker0: port 1(veth3eb45d3) entered blocking state
> > > [ 1023.800790] docker0: port 1(veth3eb45d3) entered disabled state
> > > [ 1023.805585] device veth3eb45d3 entered promiscuous mode
> > > [ 1025.295999] eth0: renamed from veth7e4183e
> > > [ 1025.310388] IPv6: ADDRCONF(NETDEV_CHANGE): veth3eb45d3: link becomes ready
> > > [ 1025.310681] docker0: port 1(veth3eb45d3) entered blocking state
> > > [ 1025.310801] docker0: port 1(veth3eb45d3) entered forwarding state
> > > [ 1025.979813] docker0: port 1(veth3eb45d3) entered disabled state
> > > [ 1025.990858] veth7e4183e: renamed from eth0
> > > [ 1026.087161] docker0: port 1(veth3eb45d3) entered disabled state
> > > [ 1026.088367] device veth3eb45d3 left promiscuous mode
> > > [ 1026.088471] docker0: port 1(veth3eb45d3) entered disabled state
> > > 
> > > root@isar:~# echo $?
> > > 0
> > > ```
> > > 
> > > This looks strange. Nothing changed (the image hash is the same),
> > > but the second run works well. After rebooting the qemu machine it
> > > still works.
> > > 
> > > The podman prebuilt image looks unaffected - it works from the
> > > beginning.
> > > 
> > 
> > Strange, all that used to work. You manually reproduced this as
> > well,
> > not only via the testsuite, right? Let me test again locally...
> > 
> > Jan
> > 
> 
> For manual tests I used images taken from CI (ones that had failed). As
> far as I could see, the issue in my case was caused by a zero-size
> "/bin/busybox" somewhere in /var/lib/docker/overlay2/. The file was
> broken, and reinstalling the container fixed this.
> 
> But I guess this was caused by an already "spoiled" image that had been
> tested in CI. When I built a new image on a local machine and didn't
> try to run qemu with it (i.e., didn't modify it), manually running the
> docker image in it worked well. The busybox binary from the alpine
> container was OK in that case.
> 
> Continue debugging ...
> 

So, it was a logical error I made in the test script. After polling for
docker images, I wrongly got an error code, so "docker run" was not even
started. This made the CI test fail, the qemu machine was interrupted,
and this broke the busybox binary (ext4 was not synced). So, on the next
(manual) boot it had size 0 and nothing worked.

Currently I have a proper script that has run OK on at least three
different build machines, so I'll resend a new patch soon.

Patch

diff --git a/meta-test/recipes-core/images/isar-image-ci.bb b/meta-test/recipes-core/images/isar-image-ci.bb
index e5d51e6e..9133da74 100644
--- a/meta-test/recipes-core/images/isar-image-ci.bb
+++ b/meta-test/recipes-core/images/isar-image-ci.bb
@@ -16,6 +16,7 @@  IMAGE_INSTALL += "sshd-regen-keys"
 
 # qemuamd64-bookworm
 WKS_FILE:qemuamd64:debian-bookworm ?= "multipart-efi.wks"
+IMAGE_INSTALL:append:qemuamd64:debian-bookworm = " prebuilt-docker-img prebuilt-podman-img"
 
 # qemuamd64-bullseye
 IMAGE_FSTYPES:append:qemuamd64:debian-bullseye ?= " cpio.gz tar.gz"
@@ -51,3 +52,4 @@  IMAGER_INSTALL:append:qemuarm:debian-bookworm ?= " ${SYSTEMD_BOOTLOADER_INSTALL}
 # qemuarm64-bookworm
 IMAGE_FSTYPES:append:qemuarm64:debian-bookworm ?= " wic.xz"
 IMAGER_INSTALL:append:qemuarm64:debian-bookworm ?= " ${GRUB_BOOTLOADER_INSTALL}"
+IMAGE_INSTALL:append:qemuarm64:debian-bookworm = " prebuilt-docker-img prebuilt-podman-img"
diff --git a/testsuite/citest.py b/testsuite/citest.py
index 7064c1e4..4a248a49 100755
--- a/testsuite/citest.py
+++ b/testsuite/citest.py
@@ -609,3 +609,27 @@  class VmBootTestFull(CIBaseTest):
             image='isar-image-ci',
             script='test_kernel_module.sh example_module',
         )
+
+    def test_amd64_bookworm_prebuilt_containers(self):
+        self.init()
+        self.vm_start(
+            'amd64', 'bookworm', image='isar-image-ci',
+            cmd='echo root | su -c \'PATH=$PATH:/usr/sbin;'
+                'for n in $(seq 30);'
+                '  do docker images | grep -q alpine && break; sleep 10; done;'
+                'docker run --rm quay.io/libpod/alpine:3.10.2 true && '
+                'for n in $(seq 30);'
+                '  do podman images | grep -q alpine && break; sleep 10; done;'
+                'podman run --rm quay.io/libpod/alpine:latest true\'')
+
+    def test_arm64_bookworm_prebuilt_containers(self):
+        self.init()
+        self.vm_start(
+            'arm64', 'bookworm', image='isar-image-ci',
+            cmd='echo root | su -c \'PATH=$PATH:/usr/sbin;'
+                'for n in $(seq 30);'
+                '  do docker images | grep -q alpine && break; sleep 10; done;'
+                'docker run --rm quay.io/libpod/alpine:3.10.2 true && '
+                'for n in $(seq 30);'
+                '  do podman images | grep -q alpine && break; sleep 10; done;'
+                'podman run --rm quay.io/libpod/alpine:latest true\'')
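For reference, once Python concatenates the `cmd=` fragments above, the
result is plain POSIX shell: the whole inner script is one single-quoted
argument to `su -c`, with the root password piped in via `echo root`. A
small sketch (the `/tmp/ci_cmd.sh` path is just an illustration) that
writes out the expanded command and checks with `sh -n` that the quoting
survives, without needing docker, podman, or su:

```shell
#!/bin/sh
# Write the expanded test command to a file; `sh -n` parses it without
# executing anything, so this syntax check runs on any host.
cat > /tmp/ci_cmd.sh <<'EOF'
echo root | su -c 'PATH=$PATH:/usr/sbin;
for n in $(seq 30); do docker images | grep -q alpine && break; sleep 10; done;
docker run --rm quay.io/libpod/alpine:3.10.2 true &&
for n in $(seq 30); do podman images | grep -q alpine && break; sleep 10; done;
podman run --rm quay.io/libpod/alpine:latest true'
EOF
sh -n /tmp/ci_cmd.sh && echo "syntax OK"
```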