Hi everyone,
I'm using nginx as a reverse proxy to make sites from my private networks publicly available, to get the caching benefit, and to easily put SSL in front of unprotected sites. I'm also using it along with Docker, and the latter is where my problem starts:
Nginx is running inside a Docker container. There are also multiple other containers on the same host, which provide different web apps like Jenkins, Piwik, Redmine, and so on. When working with Docker, linking containers is a common technique. By default, containers cannot access each other. Unless you link them: then not only can you access them, but Docker also writes the DNS alias to /etc/hosts, so that, for example, your MySQL server, which got the (randomly picked) IP 172.17.0.31, is always available as "mysql". So what I currently do is configure nginx's "proxy_pass" option to point to "http://jenkins", for example. When the Docker container "jenkins" and nginx are linked, this works great.
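For reference, the relevant vhost in that setup would look roughly like this (the server_name is made up; "jenkins" is the link alias from above):

```nginx
server {
    listen 80;
    server_name jenkins.example.com;

    location / {
        # "jenkins" resolves via the /etc/hosts entry Docker writes for the link.
        proxy_pass http://jenkins;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```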
But if something happens to the jenkins container and it doesn't start up correctly, then nginx refuses to start, too, since it cannot resolve "jenkins" anymore.
Even though I googled heavily, I couldn't find a solution to this. What I want is this: when you have 20 sites using proxy_pass and one isn't available, nginx should start nevertheless, skipping the one site that doesn't work or behaves unexpectedly, while the other 19 sites keep working. Currently, every error in any Docker container leads to a complete blackout of the whole landscape ...
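One workaround worth noting (not from this thread, so treat it as an assumption about your setup): if proxy_pass gets its target from a variable, nginx defers the DNS lookup from startup to request time, so an unresolvable upstream no longer prevents nginx from starting. This requires a resolver directive, which bypasses /etc/hosts, so it only helps when the name is served by real DNS (e.g. Docker's embedded DNS at 127.0.0.11 on user-defined networks, rather than /etc/hosts-based links):

```nginx
server {
    listen 80;
    server_name jenkins.example.com;   # made-up hostname

    # Docker's embedded DNS on user-defined networks; adjust for your setup.
    resolver 127.0.0.11 valid=10s;

    location / {
        # Because the target is a variable, nginx resolves "jenkins" per
        # request instead of at startup; a stopped container then yields a
        # 502 for this one site instead of blocking nginx entirely.
        set $upstream http://jenkins;
        proxy_pass $upstream;
    }
}
```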
Any ideas on how to make this possible?
What about https://registry.hub.docker.com/u/jwilder/nginx-proxy/?
I have been using this image as a reverse proxy for some time now and it works great; no linking required.
Thanks for your reply.
Linking is actually wanted for some of the containers, so nobody has to care about IPs, "real" DNS names for the sake of proxying, and so on. The solution should go more in the direction of nginx dynamically loading (or not loading) configs based on whether a container is reachable, or silently ignoring a missing upstream DNS name.
nginx-proxy dynamically creates nginx configuration files. I don't know exactly how it works internally, but it has a connection to the Docker daemon running on the host, and as soon as a valid container (maybe they check for environment variables or something like that) shows up, the configuration file gets changed.
You don't have to care about IPs. nginx-proxy figures out the IP of the container using the Docker daemon.
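For anyone finding this later, the basic usage from the image's documentation looks like this (the hostname is made up; see the registry page linked above for details):

```shell
# Run nginx-proxy with read-only access to the Docker socket so it can
# watch for containers starting and stopping.
docker run -d -p 80:80 \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy

# Any container started with a VIRTUAL_HOST environment variable gets a
# vhost generated for it automatically; stop the container and the vhost
# is removed again.
docker run -d -e VIRTUAL_HOST=jenkins.example.com jenkins
```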
Maybe try it out. From your reply, it looks like it does exactly what you want.
Hi mychris,
you are right, it does. I will look into it in the next few days. Even if it doesn't fit 100%, maybe the know-how can be extracted at least. Thank you
FWIW, I have a number of web services fronted by an nginx reverse-proxy container, and the other services don't get killed if one container goes down. The nginx container gets restarted when the broken container is restarted (all under systemd control), so that does temporarily remove the connection, but only for the period of the restart.
I tried the nginx-proxy container a couple of times, but for some reason I don't seem to be smart enough to get it working! It also wouldn't work completely for me anyway, because it only handles one port and some of my containers require two or more.
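A sketch of how such systemd wiring might look (the unit and container names are hypothetical, not from this post):

```ini
# /etc/systemd/system/nginx-proxy.service (hypothetical example)
[Unit]
Description=nginx reverse-proxy container
After=docker.service docker-jenkins.service
# PartOf propagates stop/restart of the jenkins unit to this unit, so the
# nginx container is restarted whenever the broken container is restarted.
PartOf=docker-jenkins.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f nginx-proxy
ExecStart=/usr/bin/docker run --name nginx-proxy \
    --link jenkins:jenkins -p 80:80 nginx
ExecStop=/usr/bin/docker stop nginx-proxy
Restart=on-failure

[Install]
WantedBy=multi-user.target
```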
Scott