
#1 2018-08-19 12:59:25

Corpswalker
Member
Registered: 2010-06-10
Posts: 15

Portforwarding with networkd towards nspawn container

Hi,
since last week I've been setting up my new home server (the third in nine years now), sporting various online and offline services.
Since my last setup (my second server ran 24/7 for five years without any big incidents), many new features have matured, and I decided to try out containers (nspawn) instead of VMs.

System info:
Arch x86_64
systemd 239.0-2
linux 4.18.0-rc1-bae3d5443de1


I've managed to learn how to create a little virtual network (1 eth, 1 bridge, 1 static IP) with 2 nspawn containers (one veth/bridge set up by nspawn, one with a static IP) using the "Bridge=br0" option inside the containers' .nspawn files. These two containers get their own public IP addresses and are accessible over the LAN.
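For context, each of those bridged containers gets a minimal .nspawn file roughly like this (just a sketch; the file name is an example):

fileserver.nspawn

[Network]
Bridge=br0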

I'm struggling to find the correct settings for a new container (let's call it Cups) that uses the host's internet access and is reachable through the host's port 631. I'm trying to avoid publishing the container itself to the LAN.
The container has access to the host and the internet (networkd, resolved and org.cups.cupsd are running). I can access the CUPS web UI inside the container, but not from the host. cupsd.conf is configured to allow everyone to contact the server on port 631. In other words, the container's door should be wide open ^_^.

The config files and some console output follow:

Host:
10-wired.network

[Match]
Name=en*

[Network]
Bridge=br0
IPForward=yes
IPMasquerade=yes

20-bridge0.netdev

[NetDev]
Name=br0
Kind=bridge

30-static.network

[Match]
Name=br0

[Network]
Address=192.168.1.91/24
Gateway=192.168.1.1
DNS=192.168.1.1

Cups container
cups.nspawn

[Files]
Bind=/var/cache/pacman/pkg
TemporaryFileSystem=/tmp

[Network]
Port=tcp:631:631

/// The following files are shipped with systemd

80-container-host0.network

[Match]
Virtualization=container
Name=host0

[Network]
DHCP=yes
LinkLocalAddressing=yes
LLDP=yes
EmitLLDP=customer-bridge

[DHCP]
UseTimezone=yes

80-container-ve.network

[Match]
Name=ve-*
Driver=veth

[Network]
# Default to using a /28 prefix, giving up to 13 addresses per container.
Address=0.0.0.0/28
LinkLocalAddressing=yes
DHCPServer=yes
IPMasquerade=yes
LLDP=yes
EmitLLDP=customer-bridge

80-container-vz.network

[Match]
Name=vz-*
Driver=bridge

[Network]
# Default to using a /24 prefix, giving up to 253 addresses per virtual network.
Address=0.0.0.0/24
LinkLocalAddressing=yes
DHCPServer=yes
IPMasquerade=yes
LLDP=yes
EmitLLDP=customer-bridge

networkctl

IDX LINK             TYPE               OPERATIONAL SETUP
  1 lo               loopback           carrier     unmanaged
  2 enp3s0           ether              degraded    configured
  3 br0              bridge             routable    configured
  4 vb-usenet        ether              degraded    unmanaged
  5 vb-fileserver    ether              degraded    unmanaged
  9 ve-cups          ether              routable    configured

ip a on host

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether e0:d5:5e:bf:db:b8 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::e2d5:5eff:febf:dbb8/64 scope link
       valid_lft forever preferred_lft forever
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5e:1c:36:e2:4f:55 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.91/24 brd 192.168.1.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::5c1c:36ff:fee2:4f55/64 scope link
       valid_lft forever preferred_lft forever
4: vb-usenet@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
    link/ether 76:f6:94:e1:21:48 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::74f6:94ff:fee1:2148/64 scope link
       valid_lft forever preferred_lft forever
5: vb-fileserver@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
    link/ether 26:b6:12:b0:2b:6d brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::24b6:12ff:feb0:2b6d/64 scope link
       valid_lft forever preferred_lft forever
9: ve-cups@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5a:eb:d0:92:b4:1f brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet 169.254.112.13/16 brd 169.254.255.255 scope link ve-cups
       valid_lft forever preferred_lft forever
    inet 10.0.0.1/28 brd 10.0.0.15 scope global ve-cups
       valid_lft forever preferred_lft forever
    inet6 fe80::58eb:d0ff:fe92:b41f/64 scope link
       valid_lft forever preferred_lft forever

ip a on container

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: host0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7e:6c:2f:f5:2c:7c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.163.3/16 brd 169.254.255.255 scope link host0
       valid_lft forever preferred_lft forever
    inet 10.0.0.14/28 brd 10.0.0.15 scope global dynamic host0
       valid_lft 2739sec preferred_lft 2739sec
    inet6 fe80::7c6c:2fff:fef5:2c7c/64 scope link
       valid_lft forever preferred_lft forever

iptables -t nat -L -n -v (host)

Chain PREROUTING (policy ACCEPT 4260 packets, 2246K bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:631 ADDRTYPE match dst-type LOCAL to:10.0.0.14:631

Chain INPUT (policy ACCEPT 42 packets, 7652 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 40 packets, 2700 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0           !127.0.0.0/8          tcp dpt:631 ADDRTYPE match dst-type LOCAL to:10.0.0.14:631

Chain POSTROUTING (policy ACCEPT 30 packets, 2104 bytes)
 pkts bytes target     prot opt in     out     source               destination
  114 17893 MASQUERADE  all  --  *      *       10.0.0.0/28          0.0.0.0/0

Thank you in advance for your input!

*Edit*
Trying to reach cups' WebUI, I get the following output:
curl -v cups:631

 Rebuilt URL to: cups:631/
*   Trying fe80::7c6c:2fff:fef5:2c7c...
* TCP_NODELAY set
* connect to fe80::7c6c:2fff:fef5:2c7c port 631 failed: Connection refused
*   Trying 169.254.163.3...
* TCP_NODELAY set
* connect to 169.254.163.3 port 631 failed: Connection refused
*   Trying 10.0.0.14...
* TCP_NODELAY set
* connect to 10.0.0.14 port 631 failed: Connection refused
* Failed to connect to cups port 631: Connection refused
* Closing connection 0
curl: (7) Failed to connect to cups port 631: Connection refused

Last edited by Corpswalker (2018-08-20 16:53:56)


#2 2018-08-20 21:13:17

Corpswalker
Member
Registered: 2010-06-10
Posts: 15

Re: Portforwarding with networkd towards nspawn container

Update:

after refreshing my iptables memories I found the culprit (quite stupid of me not to see it): "ADDRTYPE match dst-type LOCAL" in the PREROUTING rule. It restricts the rule to packets whose destination address counts as local, which in my setup meant the internal 10.0.0.0/28 network, so only calling it from inside the host using its external IP address was successful.

My current workaround is an iptables systemd service that adds the same NAT rules without the LOCAL address-type restriction.
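For completeness, here is a minimal sketch of such a oneshot unit (the unit name is made up; the rules mirror the generated ones minus the ADDRTYPE restriction and assume the container keeps 10.0.0.14):

cups-dnat.service

[Unit]
Description=DNAT host port 631 to the cups container
After=network-pre.target
Before=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/iptables -t nat -A PREROUTING -p tcp --dport 631 -j DNAT --to-destination 10.0.0.14:631
ExecStart=/usr/bin/iptables -t nat -A OUTPUT -p tcp ! -d 127.0.0.0/8 --dport 631 -j DNAT --to-destination 10.0.0.14:631

[Install]
WantedBy=multi-user.target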

I tried to find a way to avoid this extra step by modifying the container's network configuration files (which trigger the creation of these iptables rules on the host side) so that the generated rules come without this limitation.

Does somebody know a solution to this kind of problem by relying solely on nspawn/networkd configuration files?

Last edited by Corpswalker (2018-08-24 18:48:10)


#3 2018-08-24 06:25:04

gdkags
Member
Registered: 2010-10-12
Posts: 18

Re: Portforwarding with networkd towards nspawn container

From what I've read, just setting "Port=" is not enough (systemd.nspawn(5) says it's a privileged option).
Try adding this to your .nspawn:

[Exec]
Capability=CAP_NET_ADMIN CAP_NET_BIND_SERVICE
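i.e. merged with the sections you already posted, cups.nspawn would then read roughly like this (a sketch, adapt to your setup):

cups.nspawn

[Exec]
Capability=CAP_NET_ADMIN CAP_NET_BIND_SERVICE

[Files]
Bind=/var/cache/pacman/pkg
TemporaryFileSystem=/tmp

[Network]
Port=tcp:631:631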


#4 2018-08-25 14:44:42

Corpswalker
Member
Registered: 2010-06-10
Posts: 15

Re: Portforwarding with networkd towards nspawn container

Hi,

I tried your suggestion, but the generated iptables entries didn't change and the port forwarding still didn't work.
I did restart the host after changing the .nspawn file. ;-)

I fooled around a bit and made the following observations/steps:

windows client :
- ping hostIp resolved to the host's IP
- ping hostname resolved to the IPv6 address of the host's bridge interface (I didn't expect that; the router in between doesn't use IPv6, only the internal virtual network of my home server does)

cups container
- changed the entry in 80-container-host0.network: "DHCP=yes" => "DHCP=ipv4"
- restarted networkd in the container and on the host => only IPv4 addresses (since no ip6tables entries were generated and I'm too lazy to enable its systemd service and write new rules)

That had zero effect, so I reverted the changes and inserted ip6tables rules, but I couldn't reach the container via telnet to fe80::7c6c:2fff:fef5:2c7c, so I might have reached a dead end on IPv6.

windows client
CUPS WebUI is accessible via both IP and hostname.
Printing via hostname doesn't work; the printer can't be found by Windows when searching for it with http://heimserver:631/printers/Brother_QL-700
Printing via IP doesn't work either, even though the printer is found by Windows when searching for it with http://*ip*:631/printers/Brother_QL-700

I'm dazzled here; it makes no sense to me that I cannot print using the host's IP. I tried running the CUPS server on the host with the same cups and printer configs and the iptables rules disabled => it prints fine via both IP and hostname.

I think I'm missing several details here ^_^, maybe my thread would be better placed in the Newbie Corner!

Is there more information I could post for further ideas?

