Hi,
I am playing with Docker and I realized that I am not able to start any service with systemctl.
I made a simple openssh installation and when I do
systemctl start sshd.service
I get
Failed to get D-Bus connection: Operation not permitted
It looks like sshd.service has no entry in systemd; do I need to create one manually? (I guess not, since that should be done automatically; I think the problem is rather that systemd itself is not running.)
I have read a lot of articles regarding Docker & systemd, and I also tried the --privileged option, but nothing solved my issue.
It is important for me: for this test I am using openssh, but I would like to base all my containers on services enabled with systemctl (even if Docker is not a pure VM solution, it would be a good way to achieve the same objective for me).
If anyone has successfully made it work I would love to see how, or at least to get some explanation of why this is not working.
Many thanks :-)
belette
Last edited by belette (2015-02-26 15:07:14)
Offline
Systemd doesn't run inside your container unless you tell it to. Typically that's not the case...you run one process per container wherever possible. It's not really designed to be a full VM replacement. For an sshd container, you would set the entrypoint (or cmd) to something like '/usr/bin/sshd -D'. You need to go read some good basic tutorials to get a better understanding about how it's supposed to work. The official Docker tutorial is good, as is this Digital Ocean tutorial series. You will get frustrated very quickly trying to dive right in without a more-than-basic understanding first!
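As a rough sketch (base image and package name here are just an assumption on my part, not something you have to use), a minimal Dockerfile for an sshd container could look something like:
FROM debian:jessie
RUN apt-get update && apt-get install -y openssh-server && mkdir -p /var/run/sshd
# -D keeps sshd in the foreground so it stays the container's main process
CMD ["/usr/sbin/sshd", "-D"]
Note the path is /usr/sbin/sshd on Debian-based images; on Arch it would be /usr/bin/sshd.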
Scott
Last edited by firecat53 (2015-02-24 20:06:09)
Offline
Many thanks for your quick reply
Yes, I do agree that Docker is not the same as a VM, but I would like at least to be able to "assign" more than one process/app per container (for example ssh + apache, ssh + postfix ...).
This is the reason why I am trying to see how to make systemd work inside the container, to manage more than one service easily in one container.
If you have any idea on how to do that I would be very happy.
Many thanks for the documentation, I am going to read it now
Offline
I would not recommend running SSH in any of the containers for maintenance reasons. Use an SSH server on the host and use
# docker exec -it <name> <shell>
to enter the container. Use volumes to copy stuff in and out.
If you want to run multiple processes (which in my opinion is totally fine), I use supervisord to achieve that.
http://supervisord.org/
http://docs.docker.com/articles/using_supervisord/
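As a rough sketch of the idea (the program names and paths below are only placeholders along the lines of the second link): supervisord runs as the container's main process and spawns the others. A supervisord.conf like
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:apache2]
command=/usr/sbin/apache2ctl -D FOREGROUND
gets copied into the image (e.g. to /etc/supervisor/conf.d/supervisord.conf on Ubuntu), and the Dockerfile ends with
CMD ["/usr/bin/supervisord"]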
Offline
Many thanks mychris!
Yes, I used to do a docker attach rather than a docker exec, but it is the same idea.
Have you got an example of where using SSH inside a container is not a good idea?
When you speak about volumes, do you mean the -v option on the container to share with the host?
Is it a secure approach to do that and then create one folder per container on the host?
Thanks for the link (supervisord), so you are using it instead of playing with systemd?
Last edited by belette (2015-02-25 11:43:17)
Offline
Thanks for the link (supervisord), so you are using it instead of playing with systemd?
Yes, I do not use systemd inside containers. When I have a container which has to run multiple processes I use supervisord to manage the processes.
Have you got some example where using SSH is not a good idea inside a container?
I think it is not a good idea because normally you don't need it. If you have SSH access to the host machine you don't need to SSH into containers. A container can expose parts of its filesystem using something like
VOLUME ["/data"]
in the Dockerfile (see https://docs.docker.com/reference/builder/#volume). With those you can create temporary containers to backup data (see http://docs.docker.com/userguide/dockervolumes/).
When you speak about volumes, do you mean the -v option on the container to share with the host?
Is it a secure approach to do that and then create one folder per container on the host?
You don't have to bind mount a folder from the host machine. If you define a VOLUME in the Dockerfile, the container will expose this volume to other containers. You can create a throwaway container using something like
# docker run -it --volumes-from <container which exposes a volume> --volume $(pwd):/backup ubuntu bash
Now this container will have mounted all the exposed volumes from <container which exposes a volume> and will have /backup bind mounted to the current working directory on the host. This way you can copy files from <container which exposes a volume> to the host easily. (You don't have to run an interactive shell, you could also just copy the stuff.)
Another example from the docker docs:
# docker run --volumes-from dbdata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
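A matching restore step (adapted slightly from those docs; the container names are just examples) would be roughly
# docker run --volumes-from dbdata2 -v $(pwd):/backup ubuntu bash -c "cd / && tar xvf /backup/backup.tar"
i.e. you attach a fresh data container and untar the archive back into its volume.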
In my opinion it depends on what you wanna do with the container. You can also create data containers which only hold data for other containers (and share volumes).
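Such a data-only container can be created with something along these lines (image and names here are only examples):
# docker create -v /dbdata --name dbdata ubuntu /bin/true
# docker run -d --volumes-from dbdata --name db1 some-database-image
The first container never actually runs; it just owns the /dbdata volume which the second one mounts.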
I personally use bind mounts from the host if I have to change the data often. I think this is a matter of taste. Find a strategy yourself. But if you don't think about this, you will end up running an SSH server in every container you have.
Offline
Many thanks for all your detailed replies, they help me a lot in understanding the philosophy behind Docker better.
I agree that having SSH to the host is enough and avoids managing one SSH server per container; the host then acts as a "hub" to access every container. I like it!
Thanks for your vision regarding volumes and data sharing, I think your method is definitely a good one!
For networking I have already worked on integrating with Openvswitch, so I can expose the addressing & VLANs I want (I know I can use links between containers and other network approaches, but I like Openvswitch as I can use monitoring and some other advanced features).
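For reference, Open vSwitch ships an ovs-docker helper script that can do the attachment; it looks roughly like this (bridge name, interface name and address are only examples, check ovs-docker itself for the exact options):
# ovs-vsctl add-br br0
# ovs-docker add-port br0 eth1 <container> --ipaddress=192.168.1.2/24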
Now I need to have a deeper look at your links and the Docker documentation (I have already read the one from Digital Ocean that firecat53 sent me).
Offline
mychris I have a little question for you.
I have read all the Docker documentation and everything is clear to me now; I have made some tests and so far I am able to do what I need.
My idea is to create one container per "task/service" (mail / web / wiki ...).
In the case of the mail container I would need to have postfix / davmail and roundcube running; I am wondering what the best practice would be in the Docker world.
How should I get all these 3 processes up & running? In the same container? In more than one? (I guess all of them can work, but I am trying to understand the best architecture design before starting all my work.)
Many thanks again!
Offline
I can't answer your question. I create containers as I see fit and make a setup which is the easiest for me.
Maybe you should ask your question at the docker forums https://forums.docker.com/.
For me it is easier to have a webserver running in every container which serves a web service, and to have an nginx proxy container (https://registry.hub.docker.com/u/jwilder/nginx-proxy/). But that's just how I do it; others may recommend a different setup and I am no docker expert.
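For reference, the usual pattern with that proxy image is roughly the following (hostnames and image names are just examples):
# docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# docker run -d -e VIRTUAL_HOST=wiki.example.com my-wiki-image
The proxy watches the Docker socket and generates the nginx configuration from the VIRTUAL_HOST environment variables of the other containers.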
Offline
You are right, it is more of a Docker-related question.
Many thanks for all your help
Offline