Hi there, I just updated to the latest stable kernel (4.15.1) and noticed that I can't run some of my Docker containers anymore. A good example is any CentOS image older than version 7. If I try to run "docker run --rm -ti --name test centos:6.9 /bin/bash", the container terminates with exit code 139 (which, afaik, means SIGSEGV). I tested this on two different machines and got the same results, so I'm assuming this is not related to my Docker setup. I wonder what this issue is caused by: the latest Spectre/Meltdown mitigations, or some other change in the kernel?
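For reference, a minimal way to capture the failure (just a sketch; the dmesg check assumes the kernel logs the segfault, which it normally does on x86_64):

docker run --rm centos:6.9 /bin/true; echo "exit code: $?"    # 139 = 128 + 11 (SIGSEGV)
dmesg | tail                                                  # look for a "segfault at ..." line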
Any help/info is welcome!
Thanks!
Last edited by Gonzih (2018-02-20 15:19:24)
Offline
Same issue here; there may be some useful info in this upstream issue: https://github.com/moby/moby/issues/35891
Docker logs:
kenaco@kenaco-szn-arch:~$ journalctl -b -u docker
-- Logs begin at Sun 2017-07-02 23:43:34 CEST, end at Thu 2018-02-08 08:08:49 CET. --
feb 08 07:54:43 kenaco-szn-arch systemd[1]: Starting Docker Application Container Engine...
feb 08 07:54:43 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:43.843311841+01:00" level=info msg="libcontainerd: started new docker-containerd process" pid=750
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="starting containerd" module=containerd revision=89623f28b87a6004d4b785663257362d1658a729 version=v1.0.0
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="setting subreaper..." module=containerd
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="changing OOM score to -500" module=containerd
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.content.v1.content"..." module=containerd type=io.containerd.content.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." module=containerd type=io.containerd.snapshotter.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be us
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." module=containerd type=io.containerd.snapshotter.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." module=containerd type=io.containerd.metadata.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used wit
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." module=containerd type=io.containerd.differ.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." module=containerd type=io.containerd.gc.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." module=containerd type=io.containerd.grpc.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." module=containerd type=io.containerd.grpc.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." module=containerd type=io.containerd.grpc.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." module=containerd type=io.containerd.grpc.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." module=containerd type=io.containerd.grpc.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." module=containerd type=io.containerd.grpc.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." module=containerd type=io.containerd.grpc.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." module=containerd type=io.containerd.grpc.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." module=containerd type=io.containerd.grpc.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." module=containerd type=io.containerd.monitor.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." module=containerd type=io.containerd.runtime.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." module=containerd type=io.containerd.grpc.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." module=containerd type=io.containerd.grpc.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." module=containerd type=io.containerd.grpc.v1
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd-debug.sock" module="containerd/debug"
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg=serving... address="/var/run/docker/containerd/docker-containerd.sock" module="containerd/grpc"
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44+01:00" level=info msg="containerd successfully booted in 0.082273s" module=containerd
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44.197798931+01:00" level=warning msg="failed to rename /var/lib/docker/tmp for background deletion: rename /var/lib/docker/tmp /var/lib/docker/tmp-old: file exists. Deleting synchronously"
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44.542512138+01:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44.542760850+01:00" level=warning msg="Your kernel does not support cgroup rt period"
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44.542776099+01:00" level=warning msg="Your kernel does not support cgroup rt runtime"
feb 08 07:54:44 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:44.543256532+01:00" level=info msg="Loading containers: start."
feb 08 07:54:45 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:45.539473691+01:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
feb 08 07:54:45 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:45.947606640+01:00" level=info msg="Loading containers: done."
feb 08 07:54:46 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:46.044022049+01:00" level=info msg="Docker daemon" commit=03596f51b1 graphdriver(s)=overlay version=18.01.0-ce
feb 08 07:54:46 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:46.044762117+01:00" level=info msg="Daemon has completed initialization"
feb 08 07:54:46 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:46.054457940+01:00" level=info msg="API listen on /var/run/docker.sock"
feb 08 07:54:46 kenaco-szn-arch dockerd[721]: time="2018-02-08T07:54:46.054468803+01:00" level=info msg="API listen on [::]:2376"
feb 08 07:54:46 kenaco-szn-arch systemd[1]: Started Docker Application Container Engine.
feb 08 08:08:08 kenaco-szn-arch dockerd[721]: time="2018-02-08T08:08:08+01:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/3296f2732de62f36d89ae4149e6b24da5c8833a2b52d8f8a277301616e729792/shim.sock" debug=false module="containerd/ta
feb 08 08:08:08 kenaco-szn-arch dockerd[721]: time="2018-02-08T08:08:08+01:00" level=info msg="shim reaped" id=3296f2732de62f36d89ae4149e6b24da5c8833a2b52d8f8a277301616e729792 module="containerd/tasks"
feb 08 08:08:08 kenaco-szn-arch dockerd[721]: time="2018-02-08T08:08:08.827945640+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
feb 08 08:08:12 kenaco-szn-arch dockerd[721]: time="2018-02-08T08:08:12.530369801+01:00" level=info msg="Layer sha256:aa3fb424f3c9f995919ee2fd506448fc66cfd5251aa61c9439aed75705195578 cleaned up"
feb 08 08:08:19 kenaco-szn-arch dockerd[721]: time="2018-02-08T08:08:19+01:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/1de7c47488abe153a0938770bf333a41c2a0f19df36ca164511543eb2a99c68e/shim.sock" debug=false module="containerd/ta
feb 08 08:08:19 kenaco-szn-arch dockerd[721]: time="2018-02-08T08:08:19+01:00" level=info msg="shim reaped" id=1de7c47488abe153a0938770bf333a41c2a0f19df36ca164511543eb2a99c68e module="containerd/tasks"
feb 08 08:08:19 kenaco-szn-arch dockerd[721]: time="2018-02-08T08:08:19.919427554+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
feb 08 08:08:21 kenaco-szn-arch dockerd[721]: time="2018-02-08T08:08:21+01:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/1285297bbebe404b7ca30a868d4ffff29c469d25a00e479d7c890fac3a86fc40/shim.sock" debug=false module="containerd/ta
feb 08 08:08:21 kenaco-szn-arch dockerd[721]: time="2018-02-08T08:08:21+01:00" level=info msg="shim reaped" id=1285297bbebe404b7ca30a868d4ffff29c469d25a00e479d7c890fac3a86fc40 module="containerd/tasks"
feb 08 08:08:21 kenaco-szn-arch dockerd[721]: time="2018-02-08T08:08:21.468642230+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
feb 08 08:08:25 kenaco-szn-arch dockerd[721]: time="2018-02-08T08:08:25+01:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/d663ef302cdf1f828e1a534e21e3a2e4fd94789b62a8dc444078e42fcd123236/shim.sock" debug=false module="containerd/ta
feb 08 08:08:26 kenaco-szn-arch dockerd[721]: time="2018-02-08T08:08:26+01:00" level=info msg="shim reaped" id=d663ef302cdf1f828e1a534e21e3a2e4fd94789b62a8dc444078e42fcd123236 module="containerd/tasks"
feb 08 08:08:26 kenaco-szn-arch dockerd[721]: time="2018-02-08T08:08:26.081815685+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Offline
Here are my logs:
Docker daemon logs with the --debug flag enabled:
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08.864743116-05:00" level=debug msg="Calling GET /_ping"
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08.866651887-05:00" level=debug msg="Calling POST /v1.35/containers/create?name=test"
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08.866883280-05:00" level=debug msg="form data: {\"AttachStderr\":true,\"AttachStdin\":false,\"AttachStdout\":true,\"Cmd\":[\"/bin/bash\"],\"Domainname\":\"\",\"Entrypoint\":null,\"Env\":[],\"HostConfig\":{\"AutoRemove\":true,\"Binds\":null,\"BlkioDeviceReadBps\":null,\"BlkioDeviceReadIOps\":null,\"BlkioDeviceWriteBps\":null,\"BlkioDeviceWriteIOps\":null,\"BlkioWeight\":0,\"BlkioWeightDevice\":[],\"CapAdd\":null,\"CapDrop\":null,\"Cgroup\":\"\",\"CgroupParent\":\"\",\"ConsoleSize\":[0,0],\"ContainerIDFile\":\"\",\"CpuCount\":0,\"CpuPercent\":0,\"CpuPeriod\":0,\"CpuQuota\":0,\"CpuRealtimePeriod\":0,\"CpuRealtimeRuntime\":0,\"CpuShares\":0,\"CpusetCpus\":\"\",\"CpusetMems\":\"\",\"DeviceCgroupRules\":null,\"Devices\":[],\"DiskQuota\":0,\"Dns\":[],\"DnsOptions\":[],\"DnsSearch\":[],\"ExtraHosts\":null,\"GroupAdd\":null,\"IOMaximumBandwidth\":0,\"IOMaximumIOps\":0,\"IpcMode\":\"\",\"Isolation\":\"\",\"KernelMemory\":0,\"Links\":null,\"LogConfig\":{\"Config\":{},\"Type\":\"\"},\"Memory\":0,\"MemoryReservation\":0,\"MemorySwap\":0,\"MemorySwappiness\":-1,\"NanoCpus\":0,\"NetworkMode\":\"default\",\"OomKillDisable\":false,\"OomScoreAdj\":0,\"PidMode\":\"\",\"PidsLimit\":0,\"PortBindings\":{},\"Privileged\":false,\"PublishAllPorts\":false,\"ReadonlyRootfs\":false,\"RestartPolicy\":{\"MaximumRetryCount\":0,\"Name\":\"no\"},\"SecurityOpt\":null,\"ShmSize\":0,\"UTSMode\":\"\",\"Ulimits\":null,\"UsernsMode\":\"\",\"VolumeDriver\":\"\",\"VolumesFrom\":null},\"Hostname\":\"\",\"Image\":\"centos:6.9\",\"Labels\":{},\"NetworkingConfig\":{\"EndpointsConfig\":{}},\"OnBuild\":null,\"OpenStdin\":false,\"StdinOnce\":false,\"Tty\":false,\"User\":\"\",\"Volumes\":{},\"WorkingDir\":\"\"}"
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08.896535413-05:00" level=debug msg="container mounted via layerStore: &{/var/lib/docker/overlay2/0b030048f70a45e9f5b028614f662175b6805fb836c0075d4c4c7b40db200a4e/merged 0x563a8123e0e0 0x563a8123e0e0}"
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08.915273286-05:00" level=debug msg="Calling POST /v1.35/containers/ed66da8714b9aab35c7fda7925ca1917f3aa895c6abf52b5298794ecd9254d3b/attach?stderr=1&stdout=1&stream=1"
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08.915357213-05:00" level=debug msg="attach: stderr: begin"
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08.915360739-05:00" level=debug msg="attach: stdout: begin"
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08.915551551-05:00" level=debug msg="Calling POST /v1.35/containers/ed66da8714b9aab35c7fda7925ca1917f3aa895c6abf52b5298794ecd9254d3b/wait?condition=removed"
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08.915825482-05:00" level=debug msg="Calling POST /v1.35/containers/ed66da8714b9aab35c7fda7925ca1917f3aa895c6abf52b5298794ecd9254d3b/start"
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08.916356266-05:00" level=debug msg="container mounted via layerStore: &{/var/lib/docker/overlay2/0b030048f70a45e9f5b028614f662175b6805fb836c0075d4c4c7b40db200a4e/merged 0x563a8123e0e0 0x563a8123e0e0}"
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08.916517648-05:00" level=debug msg="Assigning addresses for endpoint test's interface on network bridge"
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08.916529227-05:00" level=debug msg="RequestAddress(LocalDefault/172.17.0.0/16, <nil>, map[])"
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08.919838343-05:00" level=debug msg="Assigning addresses for endpoint test's interface on network bridge"
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08.924266439-05:00" level=debug msg="Programming external connectivity on endpoint test (6c81e263d2a9c4bec9fd895d1da748239b13d6bf10a73d41877284efa16bde0e)"
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08.925886314-05:00" level=debug msg="EnableService ed66da8714b9aab35c7fda7925ca1917f3aa895c6abf52b5298794ecd9254d3b START"
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08.925900281-05:00" level=debug msg="EnableService ed66da8714b9aab35c7fda7925ca1917f3aa895c6abf52b5298794ecd9254d3b DONE"
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08.927917404-05:00" level=debug msg="bundle dir created" bundle=/var/run/docker/containerd/ed66da8714b9aab35c7fda7925ca1917f3aa895c6abf52b5298794ecd9254d3b module=libcontainerd namespace=moby root=/var/lib/docker/overlay2/0b030048f70a45e9f5b028614f662175b6805fb836c0075d4c4c7b40db200a4e/merged
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08-05:00" level=debug msg="event published" module="containerd/containers" ns=moby topic="/containers/create" type=containerd.events.ContainerCreate
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08-05:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/ed66da8714b9aab35c7fda7925ca1917f3aa895c6abf52b5298794ecd9254d3b/shim.sock" debug=true module="containerd/tasks" pid=9911
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08-05:00" level=debug msg="registering ttrpc server"
Feb 08 11:29:08 xps dockerd[7687]: time="2018-02-08T11:29:08-05:00" level=debug msg="serving api on unix socket" socket="[inherited from parent]"
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09.037020022-05:00" level=debug msg="sandbox set key processing took 47.253755ms for container ed66da8714b9aab35c7fda7925ca1917f3aa895c6abf52b5298794ecd9254d3b"
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09-05:00" level=debug msg="event published" module="containerd/tasks" ns=moby topic="/tasks/create" type=containerd.events.TaskCreate
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09.120698854-05:00" level=debug msg=event module=libcontainerd namespace=moby topic=/tasks/create
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09-05:00" level=debug msg="event published" module="containerd/tasks" ns=moby topic="/tasks/start" type=containerd.events.TaskStart
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09.134926960-05:00" level=debug msg=event module=libcontainerd namespace=moby topic=/tasks/start
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09-05:00" level=debug msg="event published" module="containerd/events" ns=moby topic="/tasks/exit" type=containerd.events.TaskExit
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09.308681916-05:00" level=debug msg=event module=libcontainerd namespace=moby topic=/tasks/exit
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09.314022328-05:00" level=debug msg="attach: stdout: end"
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09.314030308-05:00" level=debug msg="attach: stderr: end"
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09-05:00" level=debug msg="received signal" module=containerd signal=child exited
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09-05:00" level=info msg="shim reaped" id=ed66da8714b9aab35c7fda7925ca1917f3aa895c6abf52b5298794ecd9254d3b module="containerd/tasks"
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09-05:00" level=debug msg="event published" module="containerd/tasks" ns=moby topic="/tasks/delete" type=containerd.events.TaskDelete
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09.337311187-05:00" level=debug msg=event module=libcontainerd namespace=moby topic=/tasks/delete
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09.337358665-05:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09.337944199-05:00" level=debug msg="Revoking external connectivity on endpoint test (6c81e263d2a9c4bec9fd895d1da748239b13d6bf10a73d41877284efa16bde0e)"
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09.341484078-05:00" level=debug msg="DeleteConntrackEntries purged ipv4:0, ipv6:0"
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09.409872859-05:00" level=debug msg="Releasing addresses for endpoint test's interface on network bridge"
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09.409935013-05:00" level=debug msg="ReleaseAddress(LocalDefault/172.17.0.0/16, 172.17.0.2)"
Feb 08 11:29:09 xps dockerd[7687]: time="2018-02-08T11:29:09-05:00" level=debug msg="event published" module="containerd/containers" ns=moby topic="/containers/delete" type=containerd.events.ContainerDelete
"docker events" output:
2018-02-08T11:17:31.452627421-05:00 container create 89d75fdc4115bcd886b3f19f4af07d28d4317f97fc0ce7bbbfda8723ad0e6f92 (build-date=20170406, image=centos:6.9, license=GPLv2, name=test, vendor=CentOS)
2018-02-08T11:17:31.454503887-05:00 container attach 89d75fdc4115bcd886b3f19f4af07d28d4317f97fc0ce7bbbfda8723ad0e6f92 (build-date=20170406, image=centos:6.9, license=GPLv2, name=test, vendor=CentOS)
2018-02-08T11:17:31.476152585-05:00 network connect 9fe4c3e90d140c2d9a77478722fb962ff8beb9e8fc7331373bf84a6289464c6d (container=89d75fdc4115bcd886b3f19f4af07d28d4317f97fc0ce7bbbfda8723ad0e6f92, name=bridge, type=bridge)
2018-02-08T11:17:31.675236026-05:00 container start 89d75fdc4115bcd886b3f19f4af07d28d4317f97fc0ce7bbbfda8723ad0e6f92 (build-date=20170406, image=centos:6.9, license=GPLv2, name=test, vendor=CentOS)
2018-02-08T11:17:31.676080887-05:00 container resize 89d75fdc4115bcd886b3f19f4af07d28d4317f97fc0ce7bbbfda8723ad0e6f92 (build-date=20170406, height=51, image=centos:6.9, license=GPLv2, name=test, vendor=CentOS, width=126)
2018-02-08T11:17:31.936985713-05:00 container die 89d75fdc4115bcd886b3f19f4af07d28d4317f97fc0ce7bbbfda8723ad0e6f92 (build-date=20170406, exitCode=139, image=centos:6.9, license=GPLv2, name=test, vendor=CentOS)
2018-02-08T11:17:32.016656026-05:00 network disconnect 9fe4c3e90d140c2d9a77478722fb962ff8beb9e8fc7331373bf84a6289464c6d (container=89d75fdc4115bcd886b3f19f4af07d28d4317f97fc0ce7bbbfda8723ad0e6f92, name=bridge, type=bridge)
2018-02-08T11:17:32.071840691-05:00 container destroy 89d75fdc4115bcd886b3f19f4af07d28d4317f97fc0ce7bbbfda8723ad0e6f92 (build-date=20170406, image=centos:6.9, license=GPLv2, name=test, vendor=CentOS)
Downgrading the kernel to 4.14-* worked as a temporary solution. In my case I used the Arch archive to do the downgrade (put "Server=https://archive.archlinux.org/repos/2017/02/01/$repo/os/$arch" into the /etc/pacman.d/mirrorlist file and execute "pacman -Suuyy"). For more info on downgrading, refer to the documentation.
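For completeness, the downgrade steps sketched out (the archive date is just the one I mention above; pick whatever snapshot has the kernel version you want):

# /etc/pacman.d/mirrorlist -- comment out the regular mirrors and add:
Server=https://archive.archlinux.org/repos/2017/02/01/$repo/os/$arch

# then downgrade everything to that snapshot and reboot:
sudo pacman -Suuyy
sudo reboot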
Last edited by Gonzih (2018-02-08 17:57:30)
Offline
I'm afraid this may be either a kernel problem or a configuration option. In my case I'm using a CentOS 5 chroot, which now refuses to run and dies with segmentation faults, so I've also been bitten by this.
It doesn't work with either systemd-nspawn or arch-chroot, so I believe it is safe to say it is not some interaction between systemd and a new kernel feature.
I haven't had time to investigate further beyond trying systemd-nspawn / arch-chroot and the current LTS kernel (4.14), which works without problems. One possible avenue of investigation is the kernel build configuration. I noticed the problem yesterday while helping one of my teachers replicate my setup on his machine; the difference is that he is using Manjaro, and for him it worked with kernel 4.15.0. So either a commit after 4.15.0 breaks things, or it is a configuration difference.
I won't have much time to dig into this before the weekend and I would like to avoid bisecting the kernel, so let's see if someone else chimes in with more clues before going down the bisect route.
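For anyone who wants to reproduce this outside of Docker, a rough sketch of what I tried (the rootfs path is only a placeholder for wherever your CentOS 5/6 tree lives):

sudo systemd-nspawn -D /path/to/centos6-rootfs /bin/true; echo $?    # segfaults on the affected kernel
sudo arch-chroot /path/to/centos6-rootfs /bin/true; echo $?          # same result without systemd-nspawn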
R00KIE
Tm90aGluZyB0byBzZWUgaGVyZSwgbW92ZSBhbG9uZy4K
Offline
I just want to chime in and say I have noticed exactly the same thing: I updated to `4.15.2-2-ARCH` today, and my CentOS 6 images will not run, while my CentOS 7 images will.
Offline
I was looking into this yesterday and I think I know what the problem is, but I haven't recompiled a kernel yet to confirm.
My suspicion is that the cause is this change in the kernel config file:
-CONFIG_X86_VSYSCALL_EMULATION=y
+# CONFIG_X86_VSYSCALL_EMULATION is not set
From asking Google and looking at the information here [1], my suspicion falls on that change, but it's possible the problem is something else entirely.
[1] https://cateee.net/lkddb/web-lkddb/X86_ … ATION.html
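To see what the running kernel was built with, a quick check (this assumes the kernel exposes its config at /proc/config.gz, which the stock Arch kernel does):

zgrep VSYSCALL /proc/config.gz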
Edit:
I have recompiled the current 4.15.2-2 kernel with CONFIG_X86_VSYSCALL_EMULATION=y, and that by itself did not fix the problem; however, if vsyscall=emulate is added to the kernel command line, then things work.
The changes that made things break seem to be these:
-CONFIG_X86_VSYSCALL_EMULATION=y
+# CONFIG_X86_VSYSCALL_EMULATION is not set
-CONFIG_LEGACY_VSYSCALL_EMULATE=y
-# CONFIG_LEGACY_VSYSCALL_NONE is not set
+# CONFIG_LEGACY_VSYSCALL_EMULATE is not set
+CONFIG_LEGACY_VSYSCALL_NONE=y
If I'm not wrong, this means vsyscall support was disabled completely, with no way to re-enable it at boot, and, on top of that, the default behaviour when vsyscall emulation is compiled in was changed to none (according to the documentation the default should be emulate).
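A quick way to check which vsyscall mode a booted kernel ended up with (a sketch; as far as I can tell the mapping is simply absent when vsyscall is off):

cat /proc/cmdline                 # look for a vsyscall= parameter
grep vsyscall /proc/self/maps     # [vsyscall] page at ffffffffff600000 when it exists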
Edit 2:
Feature request to re-enable X86_VSYSCALL_EMULATION submitted to the bug tracker: https://bugs.archlinux.org/task/57462
Last edited by R00KIE (2018-02-10 16:40:46)
R00KIE
Tm90aGluZyB0byBzZWUgaGVyZSwgbW92ZSBhbG9uZy4K
Offline
The bug is still present in 4.15.3-1.
Offline
Hi,
I was having the same issue with 4.15.3-2-ARCH.
I've added:
vsyscall=emulate
as a kernel option and now it works.
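To double-check that the parameter took effect after rebooting (a small sanity check, reusing the image from the first post):

grep -o 'vsyscall=[a-z]*' /proc/cmdline
docker run --rm centos:6.9 /bin/true && echo OK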
https://bugs.archlinux.org/task/57336
Thanks
Last edited by C0is@s (2018-02-19 11:11:49)
Offline
Seems to be solved in the 4.15.3-2 kernel with the vsyscall=emulate kernel option. Marking thread as solved.
Offline
Same issue here; I solved it by adding the kernel parameter, as mentioned previously:
sudo vim /etc/default/grub
GRUB_CMDLINE_LINUX="vsyscall=emulate"
sudo grub-mkconfig -o /boot/grub/grub.cfg
sudo reboot
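One thing to watch out for: if GRUB_CMDLINE_LINUX (or GRUB_CMDLINE_LINUX_DEFAULT) already contains options on your system, append the parameter instead of replacing the whole line, e.g. (the other option here is only a placeholder):

GRUB_CMDLINE_LINUX="loglevel=3 vsyscall=emulate"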
Offline
Welcome to the forums, kapare.
Please take the time to read our forum Code of Conduct, especially...
https://wiki.archlinux.org/index.php/Co … mpty_posts
Closing this resolved thread.
Offline