Hello there, I'm looking for someone well versed in networking to help me with a small issue.
My goal is to host a website inside a virtual machine, and connect to it from the browser of a different device on the same WLAN.
Virtual machine: Ubuntu 14.04 trusty (LXC)
Web server: nginx running PHP 7
On my local machine, I can connect either to the IP of the virtual machine, or to localhost:8080, because port 8080 is being forwarded to 80.
Based on this, I tried to connect to the IP of the VM from the second device, but it timed out. Then I tried the IP of the host machine with port 8080, but that wasn't found either.
Just connecting to the IP of my local machine reaches the nginx server outside the virtual machine, so I figured that server might be causing the problem and turned it off, but it didn't help.
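In concrete terms, the checks look something like this (HOST_WLAN_IP stands for my host's WLAN address):
# On the host: see which addresses the servers are actually listening on
ss -tlnp | grep -E ':(80|8080)'
# From the second device: test the host's WLAN address on both ports
curl -v http://HOST_WLAN_IP:8080/
curl -v http://HOST_WLAN_IP/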
So my question is: how can I allow other devices to connect to my localhost, or how can I redirect connections that reach my Arch Linux host to the web server inside the virtual machine?
I already have a bridge to share the internet connection with my VM. Is there a particular setting which I should have enabled?
6: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:54:81:0e brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
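If I understand correctly, 192.168.122.1/24 is the usual libvirt-style NAT subnet, which is separate from the WLAN subnet, so other devices on the WLAN have no route to it on their own. To see what is attached to the bridge:
# List the interfaces enslaved to virbr0 (iproute2)
ip link show master virbr0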
In my LXC config I have this:
# Template used to create this container: /home/kkri/.vagrant.d/gems/2.2.5/gems/vagrant-lxc-1.2.1/scripts/lxc-template
# Parameters passed to the template: --tarball /home/kkri/.vagrant.d/boxes/drifter-VAGRANTSLASH-trusty64-php7/1.0.3/lxc/rootfs.tar.gz --config /home/kkri/.vagrant.d/boxes/drifter-VAGRANTSLASH-trusty64-php7/1.0.3/lxc/lxc-config
# For additional config options, please look at lxc.container.conf(5)
# Uncomment the following line to support nesting containers:
#lxc.include = /usr/share/lxc/config/nesting.conf
# (Be aware this has security implications)
##############################################
# Container specific configuration (automatically set)
lxc.autodev = 1
lxc.rootfs = /var/lib/lxc/freitag_neo_default_1485787225445_52405/rootfs
lxc.rootfs.backend = dir
lxc.utsname = freitag_neo_default_1485787225445_52405
##############################################
# Network configuration (automatically set)
lxc.network.type = veth
lxc.network.link = virbr0
lxc.network.flags = up
##############################################
# vagrant-lxc base box specific configuration
# Default pivot location
lxc.pivotdir = lxc_putold
# Default mount entries
lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
lxc.mount.entry = sysfs sys sysfs defaults 0 0
# Default console settings
lxc.devttydir = lxc
lxc.tty = 4
lxc.pts = 1024
# Default capabilities
lxc.cap.drop = sys_module mac_admin mac_override sys_time
# When using LXC with apparmor, the container will be confined by default.
# If you wish for it to instead run unconfined, copy the following line
# (uncommented) to the container's configuration file.
#lxc.aa_profile = unconfined
# To support container nesting on an Ubuntu host while retaining most of
# apparmor's added security, use the following two lines instead.
#lxc.aa_profile = lxc-container-default-with-nesting
#lxc.hook.mount = /usr/share/lxc/hooks/mountcgroups
# Uncomment the following line to autodetect squid-deb-proxy configuration on the
# host and forward it to the guest at start time.
#lxc.hook.pre-start = /usr/share/lxc/hooks/squid-deb-proxy-client
# If you wish to allow mounting block filesystems, then use the following
# line instead, and make sure to grant access to the block device and/or loop
# devices below in lxc.cgroup.devices.allow.
#lxc.aa_profile = lxc-container-default-with-mounting
# Default cgroup limits
lxc.cgroup.devices.deny = a
## Allow any mknod (but not using the node)
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m
## /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
## consoles
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
## /dev/{,u}random
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
## /dev/pts/*
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 136:* rwm
## rtc
lxc.cgroup.devices.allow = c 254:0 rm
## fuse
lxc.cgroup.devices.allow = c 10:229 rwm
## tun
lxc.cgroup.devices.allow = c 10:200 rwm
## full
lxc.cgroup.devices.allow = c 1:7 rwm
## hpet
lxc.cgroup.devices.allow = c 10:228 rwm
## kvm
lxc.cgroup.devices.allow = c 10:232 rwm
## To use loop devices, copy the following line to the container's
## configuration file (uncommented).
#lxc.cgroup.devices.allow = b 7:* rwm
##############################################
# vagrant-lxc container specific configuration
# VAGRANT-BEGIN
lxc.utsname=freitag.lo
lxc.mount.entry=/sys/fs/pstore sys/fs/pstore none bind,optional 0 0
lxc.mount.entry=tmpfs tmp tmpfs nodev,nosuid,size=2G 0 0
lxc.mount.entry=/home/kkri/projects/freitag_neo vagrant none bind,create=dir 0 0
# VAGRANT-END
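For completeness, the container's address on that bridge (which any forwarding rule would need) can be listed from the host; this assumes the lxc 1.x tools:
sudo lxc-ls --fancy    # shows each container's state and IPv4 address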
So the computer where you have the container is connected via WiFi?
Yes, I don't have the option to access the LAN at my workplace.
By the way, I don't know if it's related, but when I try to SSH into my computer from another machine on the same network, the same thing happens.
Try searching on Google if you don't find it in the ArchWiki; I set it up a long time ago, so I can't tell you the precise steps.
However, it is doable, and the outline is:
1- set up the bridge address on a different subnet than your WiFi
2- the containers should take an address in this subnet
3- use iptables to NAT from your home network to the subnet your container is in (a rough sketch follows below)
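A minimal sketch of step 3, assuming the host's WLAN interface is wlan0 and the container got 192.168.122.100 (both are placeholders for your actual setup):
# Let the kernel route packets between the WLAN and the bridge
sysctl -w net.ipv4.ip_forward=1
# Send traffic hitting the host's WLAN side on port 8080 to the container's port 80
iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 8080 -j DNAT --to-destination 192.168.122.100:80
# Accept the forwarded traffic
iptables -A FORWARD -d 192.168.122.100 -p tcp --dport 80 -j ACCEPT
The same PREROUTING/FORWARD pair with --dport 22 would cover the SSH case you mentioned above.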