I created an LXC container with Alpine Linux. Everything worked: I could start, stop, and log in to the container, but there was no network. When I try to set up the network according to this wiki page: https://wiki.archlinux.org/index.php/Li … figuration I get this error:
[root@home playtime]# lxc-start -n playtime
lxc-start: playtime: lxccontainer.c: wait_on_daemonized_start: 864 Received container state "ABORTING" instead of "RUNNING"
lxc-start: playtime: tools/lxc_start.c: main: 330 The container failed to start
lxc-start: playtime: tools/lxc_start.c: main: 333 To get more details, run the container in foreground mode
lxc-start: playtime: tools/lxc_start.c: main: 336 Additional information can be obtained by setting the --logfile and --logpriority options
and I can't even start the container until I revert the network setup.
I tried to create a log file for LXC, but it was empty (even at DEBUG level), using this command:
lxc-start -n playtime -L /var/log/playtime.log -l DEBUG
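As the error output suggests, running the container in the foreground usually surfaces the actual failure. A small sketch that only assembles and prints the debug invocation (the container name `playtime` is from the post; the log path is an arbitrary assumption) — run the printed command as root on the host:

```shell
# Build the lxc-start debug invocation hinted at by the error output.
# -F keeps the container in the foreground, -l sets the log priority,
# -o names the log file (path chosen arbitrarily here).
name=playtime
logfile="/tmp/${name}.log"
cmd="lxc-start -n ${name} -F -l DEBUG -o ${logfile}"
echo "$cmd"   # prints: lxc-start -n playtime -F -l DEBUG -o /tmp/playtime.log
```

Note that the post above uses `-L`, which sets console logging; `-o`/`--logfile` is the flag that the error message refers to for the container log.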
I use NetworkManager.service for the Ethernet connection.
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:1d:60:17:51:a8 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.4/24 brd 192.168.1.255 scope global dynamic noprefixroute enp2s0
valid_lft 75185sec preferred_lft 75185sec
inet6 fe80::82fb:a22e:49c3:ea03/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: lxcbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 10.0.3.1/24 scope global lxcbr0
valid_lft forever preferred_lft forever
[root@home playtime]# systemctl status lxc-net.service
● lxc-net.service - LXC network bridge setup
Loaded: loaded (/usr/lib/systemd/system/lxc-net.service; disabled; vendor preset: disabled)
Active: active (exited) since Wed 2019-02-27 11:41:20 CET; 2h 5min ago
Process: 1438 ExecStart=/usr/lib/lxc/lxc-net start (code=exited, status=0/SUCCESS)
Main PID: 1438 (code=exited, status=0/SUCCESS)
Tasks: 1 (limit: 2370)
Memory: 2.5M
CGroup: /system.slice/lxc-net.service
└─1473 dnsmasq -u dnsmasq --strict-order --bind-interfaces --pid-file=/run/lxc/dnsmasq.pid --listen-address 10.0.3.1 --dhcp-range 10.0.3.2,10.0.3.254 --dhcp-lease-max=2>
Feb 27 11:41:20 home systemd[1]: Starting LXC network bridge setup...
Feb 27 11:41:20 home dnsmasq[1473]: started, version 2.80, cachesize 150
Feb 27 11:41:20 home dnsmasq[1473]: compile time options: IPv6 GNU-getopt DBus i18n IDN2 DHCP DHCPv6 no-Lua TFTP conntrack ipset auth DNSSEC loop-detect inotify dumpfile
Feb 27 11:41:20 home dnsmasq-dhcp[1473]: DHCP, IP range 10.0.3.2 -- 10.0.3.254, lease time 1h
Feb 27 11:41:20 home dnsmasq-dhcp[1473]: DHCP, sockets bound exclusively to interface lxcbr0
Feb 27 11:41:20 home dnsmasq[1473]: reading /etc/resolv.conf
Feb 27 11:41:20 home dnsmasq[1473]: using nameserver 192.168.1.254#53
Feb 27 11:41:20 home dnsmasq[1473]: read /etc/hosts - 0 addresses
Feb 27 11:41:20 home systemd[1]: Started LXC network bridge setup.
[root@home ~]# lxc-checkconfig
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
--- Control groups ---
Cgroups: enabled
Cgroup v1 mount points:
/sys/fs/cgroup/systemd
/sys/fs/cgroup/rdma
/sys/fs/cgroup/devices
/sys/fs/cgroup/net_cls,net_prio
/sys/fs/cgroup/hugetlb
/sys/fs/cgroup/cpu,cpuacct
/sys/fs/cgroup/cpuset
/sys/fs/cgroup/blkio
/sys/fs/cgroup/perf_event
/sys/fs/cgroup/freezer
/sys/fs/cgroup/pids
/sys/fs/cgroup/memory
Cgroup v2 mount points:
/sys/fs/cgroup/unified
Cgroup v1 clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled, not loaded
Macvlan: enabled, not loaded
Vlan: enabled, not loaded
Bridges: enabled, not loaded
Advanced netfilter: enabled, not loaded
CONFIG_NF_NAT_IPV4: enabled, not loaded
CONFIG_NF_NAT_IPV6: enabled, not loaded
CONFIG_IP_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled, not loaded
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled, not loaded
FUSE (for use with lxcfs): enabled, loaded
--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities:
Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
I want to set up some kind of networking for my container(s) (I plan to create more containers), but if I can't connect them to the internet, there is no point in creating new ones.
Last edited by xerxes_ (2019-03-06 12:27:53)
Hello there,
This might not answer your question, but have you tried LXD? I'd personally rate it twice as easy as LXC to configure and run containers with.
Regards
No, I didn't try LXD, for 2 reasons:
1. I saw in the Arch wiki that it is based on LXC and its network configuration is also based on LXC's, so it looks more or less as complex as configuring LXC.
2. LXD is not in the main repos, only in the AUR, and I'm trying to avoid the AUR for now, because I don't feel like a very experienced Arch user.
I tried systemd-nspawn, but it is so slow on my old hardware (very slow boot, ~15-20 minutes) that I won't use it (although with systemd-nspawn the network worked).
I think I'll try once more with the LXC manuals, different sites, etc. Maybe the Arch wiki is just somewhat outdated: https://wiki.archlinux.org/index.php/Linux_Containers.
Or maybe I'll try Docker.
Moreover, I saw in the manuals and the Arch wiki that with containers I have to install a whole system. Is it possible to run only one or a few apps in a container without installing a whole system (something like Firejail does)? I know that Firejail is only a sandbox, but I want to do something like that with some kind of container (it must be a container by definition).
+1 for LXC over LXD.
Post the container config
The problem is that Ubuntu has decided to stop supporting LXC in favour of LXD. You're going to hit a brick wall. What part of LXD didn't work? If you're dead set on using plain LXC, the config might help. However, as I said, unless someone else picks up LXC, you're on your own.
LXD is a bit more straightforward.
The problem is that Ubuntu has decided to stop supporting LXC in favour of LXD.
Why does this factor into the equation? Upstream != Ubuntu.
Hi @graysky
Yeah, in my experience LXC was an Ubuntu-sponsored technology. I have a lot of issues with LXC, and every bug report I submitted to Red Hat or GitHub has been answered with "it's fixed in LXD". I stuck with LXC for the longest time, but using LXD is quite accommodating, IMHO.
Regards
FYI, I have three Proxmox servers here that use LXC, but they're much easier to maintain.
@bugs - To each his own. The point I wanted to clarify is that LXC upstream and Ubuntu not packaging it are two independent things. I don't much care what other distros package. LXC has an active upstream; in fact, if you search the closed issues I reported on their GitHub, their developers are insanely responsive.
I have a lot of issues with LXC
I don't. I find it to be highly stable and customizable. I use LXC-containerized servers for Pi-hole, WireGuard, OpenVPN, Nextcloud, and a few other uses. No issues. But we are straying from the OP's networking issue.
I don't. I find it to be highly stable and customizable. I use LXC-containerized servers for Pi-hole, WireGuard, OpenVPN, Nextcloud, and a few other uses. No issues. But we are straying from the OP's networking issue.
Same.
in fact, if you search the closed issues I reported on their GitHub, their developers are insanely responsive.
I'm glad to hear that.
So we can both work together to help. Cheers.
I'm sorry for my silence, but I was busy.
This is the default config of the container, which works (lxc-start, stop, console login), but the network does not (no network setup):
# Template used to create this container: /usr/share/lxc/templates/lxc-download
# Parameters passed to the template:
# Template script checksum (SHA-1): 273c51343604eb85f7e294c8da0a5eb769d648f3
# For additional config options, please look at lxc.container.conf(5)
# Uncomment the following line to support nesting containers:
#lxc.include = /usr/share/lxc/config/nesting.conf
# (Be aware this has security implications)
# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.arch = linux64
# Container specific configuration
lxc.rootfs.path = dir:/var/lib/lxc/playtime/rootfs
lxc.uts.name = playtime
# Network configuration
lxc.net.0.type = empty
#lxc.net.0.type = veth
#lxc.net.0.name = veth0
#lxc.net.0.flags = up
#lxc.net.0.link = bridge
#lxc.net.0.hwaddr = ee:ec:fa:e9:56:7d
#lxc.net.0.type = veth
#lxc.net.0.flags = up
#lxc.net.0.link = br0
#lxc.net.0.name = eth0
#lxc.net.0.hwaddr = 4a:49:43:49:79:bf
#lxc.net.0.ipv4.address = 10.2.3.5/24 10.2.3.255
#lxc.net.0.ipv6.address = 2003:db8:1:0:214:1234:fe0b:3597
# uncomment the next two lines if static IP addresses are needed
# leaving these commented will imply DHCP networking
#
#lxc.net.0.ipv4.address = 192.168.0.3/24
#lxc.net.0.ipv4.gateway = 192.168.0.1
Default /etc/lxc/default.conf:
lxc.net.0.type = empty
But when I change the files and run systemctl start lxc-net.service:
/var/lib/lxc/playtime:
# Template used to create this container: /usr/share/lxc/templates/lxc-download
# Parameters passed to the template:
# Template script checksum (SHA-1): 273c51343604eb85f7e294c8da0a5eb769d648f3
# For additional config options, please look at lxc.container.conf(5)
# Uncomment the following line to support nesting containers:
#lxc.include = /usr/share/lxc/config/nesting.conf
# (Be aware this has security implications)
# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.arch = linux64
# Container specific configuration
lxc.rootfs.path = dir:/var/lib/lxc/playtime/rootfs
lxc.uts.name = playtime
# Network configuration
#lxc.net.0.type = empty
#lxc.net.0.type = veth
#lxc.net.0.name = veth0
#lxc.net.0.flags = up
#lxc.net.0.link = bridge
#lxc.net.0.hwaddr = ee:ec:fa:e9:56:7d
lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.link = br0
lxc.net.0.name = eth0
lxc.net.0.hwaddr = 4a:49:43:49:79:bf
#lxc.net.0.ipv4.address = 10.2.3.5/24 10.2.3.255
#lxc.net.0.ipv6.address = 2003:db8:1:0:214:1234:fe0b:3597
# uncomment the next two lines if static IP addresses are needed
# leaving these commented will imply DHCP networking
#
#lxc.net.0.ipv4.address = 192.168.0.3/24
#lxc.net.0.ipv4.gateway = 192.168.0.1
/etc/lxc/default.conf:
#lxc.net.0.type = empty
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:3a:f1:c1
#lxc.net.0.name = eth0
#lxc.idmap = u 0 100000 65536
#lxc.idmap = g 0 100000 65536
#lxc.net.0.type = veth
#lxc.net.0.link = br0
#lxc.net.0.flags = up
and try to start the container, I get an error like in the first post:
[root@home lxc]# lxc-start -n playtime
lxc-start: playtime: lxccontainer.c: wait_on_daemonized_start: 864 Received container state "ABORTING" instead of "RUNNING"
lxc-start: playtime: tools/lxc_start.c: main: 330 The container failed to start
lxc-start: playtime: tools/lxc_start.c: main: 333 To get more details, run the container in foreground mode
lxc-start: playtime: tools/lxc_start.c: main: 336 Additional information can be obtained by setting the --logfile and --logpriority options
I noticed that I also lose auto-completion of container names.
I also added the file /etc/default/lxc-net according to https://wiki.archlinux.org/index.php/Linux_Containers:
# Leave USE_LXC_BRIDGE as "true" if you want to use lxcbr0 for your
# containers. Set to "false" if you'll use virbr0 or another existing
# bridge, or mavlan to your host's NIC.
USE_LXC_BRIDGE="true"
# If you change the LXC_BRIDGE to something other than lxcbr0, then
# you will also need to update your /etc/lxc/default.conf as well as the
# configuration (/var/lib/lxc/<container>/config) for any containers
# already created using the default config to reflect the new bridge
# name.
# If you have the dnsmasq daemon installed, you'll also have to update
# /etc/dnsmasq.d/lxc and restart the system wide dnsmasq daemon.
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
# Uncomment the next line if you'd like to use a conf-file for the lxcbr0
# dnsmasq. For instance, you can use 'dhcp-host=mail1,10.0.3.100' to have
# container 'mail1' always get ip address 10.0.3.100.
#LXC_DHCP_CONFILE=/etc/lxc/dnsmasq.conf
# Uncomment the next line if you want lxcbr0's dnsmasq to resolve the .lxc
# domain. You can then add "server=/lxc/10.0.3.1' (or your actual $LXC_ADDR)
# to your system dnsmasq configuration file (normally /etc/dnsmasq.conf,
# or /etc/NetworkManager/dnsmasq.d/lxc.conf on systems that use NetworkManager).
# Once these changes are made, restart the lxc-net and network-manager services.
# 'container1.lxc' will then resolve on your host.
#LXC_DOMAIN="lxc"
and did not change /etc/default/lxc.
So whenever I modify the files, I can't start the container.
Hello again,
Out of curiosity, how did you create that image?
I created the image according to the Arch wiki https://wiki.archlinux.org/index.php/Linux_Containers
lxc-create -n playtime -t download
then I chose alpine, release 3.9, architecture amd64.
It successfully created the image, so I could lxc-ls -f, start, and stop it. To log in I tried lxc-console -n name_of_image but couldn't, so I used lxc-attach -n CONTAINER_NAME --clear-env, created a normal user, and logged in through lxc-console -n playtime (I didn't change the container name from the wiki) as a normal user, then su to root.
Everything worked well, except there was no network, so I wanted to set up networking (to update the system and install some packages) just as in the wiki, with the help of the config files, but when I edit the configs, the container doesn't start.
So how should I set up the network for the container, so that it starts and the network works?
Can I create an image for a single application by copying only the needed files, libs, etc. into the container rootfs (/var/lib/lxc/container_name/rootfs) and creating a config file for the container in /var/lib/lxc/container_name/config?
Alright: the default config here, /etc/lxc/default.conf, only gets parsed at container creation, which is why it should have been configured for networking beforehand. See the example config below:
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:x:x:x
Ideally, you wouldn't want all your containers to share the same MAC address.
Is your bridge up and running?
ip a
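To illustrate the point above: since /etc/lxc/default.conf is only read at creation time, the bridge settings have to be in place before lxc-create runs. A side-effect-free sketch (writing to /tmp instead of /etc; the real target file is /etc/lxc/default.conf):

```shell
# Write the veth/bridge settings that lxc-create copies into each new
# container's config. /tmp/default.conf is a stand-in for the real
# /etc/lxc/default.conf.
conf=/tmp/default.conf
cat > "$conf" <<'EOF'
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
EOF
# Then recreate the container so the template picks the settings up:
#   lxc-create -n playtime -t download
grep -c '^lxc.net.0' "$conf"   # prints: 3
```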
Thanks! It worked!
I configured /etc/lxc/default.conf just as you said, created a new container, and the network works!
So it looks like I didn't understand from the wiki that I have to configure the network before creating the container.
But I have one more question and then I will close this thread: do I have to have a whole system in the container to run a single program? How do I set up a minimal container (please, at least a hint)?
I've never tested that, so I wouldn't know how to do it. Perhaps https://wiki.archlinux.org/index.php/Firejail might be a better alternative?
I thought of using readelf, ldd, or something like that to find out what a program binary needs, or just pacman -Sii, and adding that to the container system, but maybe it would be simpler to use some lightweight Linux like Alpine.
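The ldd idea could be sketched like this; /bin/sh stands in for the real program and /tmp/minroot for the container rootfs (both are illustrative assumptions, not paths from the thread). A statically linked binary gets nothing from ldd and is simply copied as-is:

```shell
# Copy one binary plus the shared libraries it links against into a
# bare rootfs tree. bin and rootfs are placeholders for illustration;
# the real rootfs would be /var/lib/lxc/<name>/rootfs.
bin=/bin/sh
rootfs=/tmp/minroot
mkdir -p "$rootfs/bin"
cp "$bin" "$rootfs/bin/"
# ldd output looks like "libc.so.6 => /usr/lib/libc.so.6 (0x...)" or
# "/lib64/ld-linux-x86-64.so.2 (0x...)"; pick out the absolute paths.
ldd "$bin" 2>/dev/null | awk '$3 ~ /^\// {print $3} $1 ~ /^\// {print $1}' |
while read -r lib; do
    mkdir -p "$rootfs$(dirname "$lib")"
    cp -L "$lib" "$rootfs$lib"
done
ls "$rootfs/bin"   # prints: sh
```

This only covers dynamically linked libraries; config files, /dev nodes, and anything the program opens at runtime (check with strace) would still need copying, which is why a minimal distro like Alpine is often the easier route.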
Anyway, thanks for all responses.
Marking thread as solved.
Last edited by xerxes_ (2019-03-06 12:26:56)