Hello,
I am playing a bit with podman and I am unable to `exec` into a running container as a non-root user.
The following commands, when run as root, start nginx and then open a shell inside the container:
podman pull nginx:1.17.9
podman run --detach --name nginx --publish 8889:80 nginx:1.17.9
podman exec -it -w / nginx /bin/sh
# Now, I have a shell inside the container.
However, when the same is executed as a normal user, the last command terminates with the following error (the container, and the server inside it, keep running):
podman exec -it -w / nginx /bin/sh
Error: writing file `/sys/fs/cgroup//user.slice/user-1000.slice/user@1000.service/user.slice/libpod-ad2f4611e3033fd57eb74579e51e22c866d77f6d2203ccec30101d5c2fa63ced.scope/cgroup.procs`: Permission denied: OCI runtime permission denied error
Here it is again with debug logging:
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/vojta/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/vojta/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000/containers
DEBU[0000] Using static dir /home/vojta/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/vojta/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] No store required. Not opening container store.
DEBU[0000] Initializing event backend journald
DEBU[0000] using runtime "/usr/bin/runc"
DEBU[0000] using runtime "/usr/bin/crun"
DEBU[0000] using runtime "/usr/bin/crun"
WARN[0000] Failed to add podman to systemd sandbox cgroup: Process org.freedesktop.systemd1 exited with status 1
INFO[0000] running as rootless
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/vojta/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/vojta/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000/containers
DEBU[0000] Using static dir /home/vojta/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/vojta/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] No store required. Not opening container store.
DEBU[0000] Initializing event backend journald
DEBU[0000] using runtime "/usr/bin/runc"
DEBU[0000] using runtime "/usr/bin/crun"
DEBU[0000] using runtime "/usr/bin/crun"
DEBU[0000] Handling terminal attach
DEBU[0000] Creating new exec session in container ad2f4611e3033fd57eb74579e51e22c866d77f6d2203ccec30101d5c2fa63ced with session id f9363c531ea3c5fb3ed5e668878a26d80ca9c4f89ca8b08dba230ae157ddbc4e
DEBU[0000] /usr/bin/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/bin/conmon args="[--api-version 1 -s -c ad2f4611e3033fd57eb74579e51e22c866d77f6d2203ccec30101d5c2fa63ced -u f9363c531ea3c5fb3ed5e668878a26d80ca9c4f89ca8b08dba230ae157ddbc4e -r /usr/bin/crun -b /home/vojta/.local/share/containers/storage/vfs-containers/ad2f4611e3033fd57eb74579e51e22c866d77f6d2203ccec30101d5c2fa63ced/userdata/f9363c531ea3c5fb3ed5e668878a26d80ca9c4f89ca8b08dba230ae157ddbc4e -p /home/vojta/.local/share/containers/storage/vfs-containers/ad2f4611e3033fd57eb74579e51e22c866d77f6d2203ccec30101d5c2fa63ced/userdata/f9363c531ea3c5fb3ed5e668878a26d80ca9c4f89ca8b08dba230ae157ddbc4e/exec_pid -l k8s-file:/home/vojta/.local/share/containers/storage/vfs-containers/ad2f4611e3033fd57eb74579e51e22c866d77f6d2203ccec30101d5c2fa63ced/userdata/f9363c531ea3c5fb3ed5e668878a26d80ca9c4f89ca8b08dba230ae157ddbc4e/exec_log --exit-dir /home/vojta/.local/share/containers/storage/vfs-containers/ad2f4611e3033fd57eb74579e51e22c866d77f6d2203ccec30101d5c2fa63ced/userdata/f9363c531ea3c5fb3ed5e668878a26d80ca9c4f89ca8b08dba230ae157ddbc4e/exit --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog -t -i -e --exec-attach --exec-process-spec /home/vojta/.local/share/containers/storage/vfs-containers/ad2f4611e3033fd57eb74579e51e22c866d77f6d2203ccec30101d5c2fa63ced/userdata/f9363c531ea3c5fb3ed5e668878a26d80ca9c4f89ca8b08dba230ae157ddbc4e/exec-process-030455427]"
INFO[0000] Running conmon under slice user.slice and unitName libpod-conmon-ad2f4611e3033fd57eb74579e51e22c866d77f6d2203ccec30101d5c2fa63ced.scope
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
WARN[0000] Failed to add conmon to systemd sandbox cgroup: Process org.freedesktop.systemd1 exited with status 1
DEBU[0000] Attaching to container ad2f4611e3033fd57eb74579e51e22c866d77f6d2203ccec30101d5c2fa63ced exec session f9363c531ea3c5fb3ed5e668878a26d80ca9c4f89ca8b08dba230ae157ddbc4e
DEBU[0000] connecting to socket /run/user/1000/libpod/tmp/socket/f9363c531ea3c5fb3ed5e668878a26d80ca9c4f89ca8b08dba230ae157ddbc4e/attach
DEBU[0000] Received: 0
DEBU[0000] Received a resize event: {Width:158 Height:38}
DEBU[0000] Received: -256
ERRO[0000] [conmon:d]: exec with attach is waiting for start message from parent
[conmon:d]: exec with attach got start message from parent
writing file `/sys/fs/cgroup//user.slice/user-1000.slice/user@1000.service/user.slice/libpod-ad2f4611e3033fd57eb74579e51e22c866d77f6d2203ccec30101d5c2fa63ced.scope/cgroup.procs`: Permission denied: OCI runtime permission denied error
I am running an up-to-date system with minimal configuration tweaks. The only change to the default configuration that is relevant here is that I have disabled v1 cgroups (via `cgroup_no_v1=all`).
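For completeness, the cgroup mode can be confirmed directly from the mount type (just a sanity check on my side; `cgroup2fs` indicates a pure v2 unified hierarchy, while `tmpfs` would suggest a hybrid/v1 layout):

```shell
# Print the filesystem type of the cgroup mount point.
# "cgroup2fs" means a pure cgroup v2 hierarchy.
stat -f -c %T /sys/fs/cgroup
```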
$ podman info
host:
BuildahVersion: 1.14.2
CgroupVersion: v2
Conmon:
package: Unknown
path: /usr/bin/conmon
version: 'conmon version 2.0.12, commit: 682e9587bff927565ec942592129a02c7d410a50'
Distribution:
distribution: arch
version: unknown
IDMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
MemFree: 3129786368
MemTotal: 8102670336
OCIRuntime:
name: crun
package: Unknown
path: /usr/bin/crun
version: |-
crun version 0.13
commit: e79e4de4ac16da0ce48777afb72c6241de870525
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
SwapFree: 0
SwapTotal: 0
arch: amd64
cpus: 4
eventlogger: journald
hostname: nonsuch.d3s.hide.ms.mff.cuni.cz
kernel: 5.5.10-arch1-1
os: linux
rootless: true
slirp4netns:
Executable: /usr/bin/slirp4netns
Package: Unknown
Version: |-
slirp4netns version 0.4.3
commit: 2244b9b6461afeccad1678fac3d6e478c28b4ad6
uptime: 12h 23m 3.03s (Approximately 0.50 days)
registries:
search:
- docker.io
- registry.fedoraproject.org
- quay.io
- registry.access.redhat.com
- registry.centos.org
store:
ConfigFile: /home/vojta/.config/containers/storage.conf
ContainerStore:
number: 2
GraphDriverName: vfs
GraphOptions: {}
GraphRoot: /home/vojta/.local/share/containers/storage
GraphStatus: {}
ImageStore:
number: 4
RunRoot: /run/user/1000/containers
VolumePath: /home/vojta/.local/share/containers/storage/volumes
I followed the steps for Buildah on the Wiki, and since the container is running, I assume the setup is basically healthy.
The rwx permissions on the file mentioned above seem okay (-rw-r--r--).
If I run crun manually (I believe podman executes it internally), I get the same error:
crun exec -t ad2f4611e3033fd57eb74579e51e22c866d77f6d2203ccec30101d5c2fa63ced /bin/sh
2020-03-25T09:26:01.000938201Z: writing file `/sys/fs/cgroup//user.slice/user-1000.slice/user@1000.service/user.slice/libpod-ad2f4611e3033fd57eb74579e51e22c866d77f6d2203ccec30101d5c2fa63ced.scope/cgroup.procs`: Permission denied
Finally, if I try to add a process to the container's cgroup directly from a shell (I am not sure it is possible to do it that easily, but I believe it demonstrates the problem), it does not work either:
( echo $$ >>/sys/fs/cgroup//user.slice/user-1000.slice/user@1000.service/user.slice/libpod-ad2f4611e3033fd57eb74579e51e22c866d77f6d2203ccec30101d5c2fa63ced.scope/cgroup.procs )
bash: echo: write error: Permission denied
Note that when the PID is invalid (nonexistent process), the error message is `bash: echo: write error: No such process` instead.
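One thing I learned while digging (this is my understanding of cgroup v2 delegation, not authoritative): the mode bits on `cgroup.procs` alone are not decisive. The kernel also requires write access to the `cgroup.procs` of the common ancestor of the source and destination cgroups, so it helps to look at ownership along the whole path. A minimal sketch (the path is the scope from my error message; substitute your own):

```shell
# Walk from the container's scope cgroup up to the root, printing
# owner, group and mode of each level. The level where ownership flips
# back to root is where delegation (and thus user write access) ends.
p=/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice
while [ "$p" != "/" ]; do
  [ -e "$p" ] && stat -c '%U:%G %a %n' "$p"
  p=$(dirname "$p")
done
```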
I would be grateful for any ideas on what to check, as I have no clue how to continue.
Thank you.
Last edited by vhotspur (2020-04-03 17:59:58)
So the problem is somehow related to D-Bus. I accidentally found this bug report, and if I run podman as
( export DBUS_SESSION_BUS_ADDRESS=; podman exec -it -w / nginx /bin/sh )
it works. So now I have to find out what is messing up my D-Bus session. The original problem is probably solved, though; I can use the following workaround for now:
alias podman='env DBUS_SESSION_BUS_ADDRESS= podman'
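For reference, before blanking the variable it is worth printing what it actually contains, and the workaround can also be written as a small wrapper function (the function and the echo are just my illustration, not anything podman-specific):

```shell
# Show what the session bus address currently points at, if anything.
echo "DBUS_SESSION_BUS_ADDRESS=${DBUS_SESSION_BUS_ADDRESS:-<unset>}"

# Same effect as the alias: blank the variable only for podman invocations.
# 'env' looks the real podman binary up in PATH, so this does not recurse.
podman() { env DBUS_SESSION_BUS_ADDRESS= podman "$@"; }
```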