NGINX service failure in Docker when limiting memory and CPU usage - nginx

I have one master and 5 worker nodes, and I am using the following command to deploy the nginx service.
It fails:
docker service create --name foo -p 32799:80 -p 32800:443 nginx --limit-cpu 0.5 --limit-memory 512M
On the other hand, this works:
docker service create --name foo -p 32799:80 -p 32800:443 nginx
Please let me know how I can limit the CPU to 1 core and the memory to 512M.

Change your command to the following and try again:
docker service create --limit-cpu 0.5 --limit-memory 512M --name foo -p 32799:80 -p 32800:443 nginx
Anything following the image name is treated as the COMMAND and its arguments, so the --limit flags must come before the image name.
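Once the service is created with the flags before the image name, you can check that the limits were applied, for example (the output shown is illustrative):
docker service inspect foo --format '{{ json .Spec.TaskTemplate.Resources.Limits }}'
# {"NanoCPUs":500000000,"MemoryBytes":536870912}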

Related

Docker and Rancher - Run multiple workers

I need to run 3 commands to run my application:
$ celery -A name worker
$ daphne name.asgi:channel_layer -b 0.0.0.0 -p 8000
$ python manage.py runworker
I need to do this with the same image, and I do not know whether it is viable to create a container for each command. What should I do?
Thanks for your help.
I realized that they are all services, so there should be a container for each one.
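For example, with Docker Compose you could run one container per command from the same image (a sketch; the image name myapp and the published port are assumptions):
docker-compose.yml
version: "3"
services:
  celery:
    image: myapp                # same application image for all three services (assumed name)
    command: celery -A name worker
  daphne:
    image: myapp
    command: daphne name.asgi:channel_layer -b 0.0.0.0 -p 8000
    ports:
      - "8000:8000"
  runworker:
    image: myapp
    command: python manage.py runworker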

Docker Container Networking with Docker-in-Docker

I would like to network with a child docker container from a parent docker container, with a docker-in-docker setup.
Let's say I'm trying to connect to a simple Apache httpd server. When I run the httpd container on my host machine, everything works fine:
asnyder:~$ docker run -d -p 8080:80 httpd:alpine
asnyder:~$ curl localhost:8080
<html><body><h1>It works!</h1></body></html>
But when I do the same from a docker-in-docker setup, I get a Connection refused error:
asnyder:~$ docker run -d --name mydind --privileged docker:dind
asnyder:~$ docker run -it --link mydind:docker docker:latest sh
/ # docker run -d -p 8080:80 httpd:alpine
/ # curl localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused
I have tried a couple of alterations without luck. Specifying the 0.0.0.0 interface:
asnyder:~$ docker run -d --name mydind --privileged docker:dind
asnyder:~$ docker run -it --link mydind:docker docker:latest sh
/ # docker run -d -p 0.0.0.0:8080:80 httpd:alpine
/ # curl 0.0.0.0:8080
curl: (7) Failed to connect to 0.0.0.0 port 8080: Connection refused
Using the host network:
asnyder:~$ docker run -d --name mydind --privileged docker:dind
asnyder:~$ docker run -it --link mydind:docker docker:latest sh
/ # docker run -d --network host httpd:alpine
/ # curl localhost:80
curl: (7) Failed to connect to localhost port 80: Connection refused
Surprisingly, I was unable to find any existing articles on this. Does anyone here have some insight?
Thanks!
There are pros and cons to both DinD and bind-mounting the Docker socket, and there are certainly use cases for both. As an example, check out this set of blog posts, which does a good job of explaining one of the use cases.
Given your example docker-in-docker setup above, you can access Apache httpd server in one of two ways:
1) From inside the docker:dind container, it will be available on localhost:8080.
2) From inside the docker:latest container, where you were trying to access it originally, it will be available on whatever hostname is set for the docker:dind container. In this case, you used --name mydind, therefore curl mydind:8080 would give you the standard Apache <html><body><h1>It works!</h1></body></html>.
Hope it makes sense!
Building upon Yuriy's answer:
2) From inside the docker:latest container, [...] it will be available on whatever hostname is set for the docker:dind container. In this case, you used --name mydind, therefore curl mydind:8080 [...]
In the Gitlab CI config, you can address the DinD container by the name of its image (in addition to the name of its container, which is auto-generated):
Accessing the services
Let’s say that you need a Wordpress instance to test some API integration with your application.
You can then use for example the tutum/wordpress image in your .gitlab-ci.yml:
services:
- tutum/wordpress:latest
If you don’t specify a service alias, when the job is run, tutum/wordpress will be started and you will have access to it from your build container under two hostnames to choose from:
tutum-wordpress
tutum__wordpress
Using
services:
- docker:dind
will allow you to access that container as docker:8080:
script:
- docker run -d -p 8080:80 httpd:alpine
- curl docker:8080
Edit: If you'd prefer a more explicit host name, you can, as the documentation states, use an alias:
services:
- name: docker:dind
alias: dind-service
and then
script:
- docker run -d -p 8080:80 httpd:alpine
- curl dind-service:8080
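Putting the pieces together, a complete job might look like this (a sketch; the job name is made up, and depending on your runner and Docker versions you may also need to point DOCKER_HOST at the service):
test-httpd:
  image: docker:latest
  services:
    - name: docker:dind
      alias: dind-service
  variables:
    DOCKER_HOST: tcp://dind-service:2375   # only needed if the docker CLI does not find the daemon on its own
  script:
    - docker run -d -p 8080:80 httpd:alpine
    - curl dind-service:8080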
Hth,
dtk
I am convinced that @Yuriy Znatokov's answer is what I want, but it took me a long time to understand it. To make it easier for others to understand, I have written out the complete steps.
1) From inside the docker:dind container
docker run -d --name mydind --privileged docker:dind
docker exec -it mydind sh
/ # docker run -d -p 8080:80 httpd:alpine
/ # curl localhost:8080
<html><body><h1>It works!</h1></body></html>
2) From inside the docker:latest container
docker run -d --name mydind --privileged docker:dind
docker run -it --link mydind:docker docker:latest sh
/ # docker run -d -p 8080:80 httpd:alpine
/ # curl mydind:8080
<html><body><h1>It works!</h1></body></html>

How to properly start nginx in Docker

I want nginx in a Docker container to host a simple static hello world html website. I want to simply start it with "docker run imagename". In order to do that I added the run parameters to the Dockerfile. The reason I want to do that is that I would like to host the application on Cloud Foundry in a next step. Unfortunately I get the following error when doing it like this.
Dockerfile
FROM nginx:alpine
COPY . /usr/share/nginx/html
EXPOSE 5000
CMD ["nginx -d -p 5000:5000"]
Error
Error starting userland proxy: Bind for 0.0.0.0:5000: unexpected error Permission denied.
From:
https://docs.docker.com/engine/reference/builder/#expose
EXPOSE does not make the ports of the container accessible to the host. To do that, you must use either the -p flag to publish a range of ports or the -P flag to publish all of the exposed ports. You can expose one port number and publish it externally under another number
CMD ["nginx -d -p 5000:5000"]
You add your dockerfile
FROM nginx:alpine
its already starts nginx.
after you build from your dockerfile
you should use this on
docker run -d -p 5000:5000 <your_image>
Edit:
If you want to map container port 80 (where nginx listens) to host port 5000:
docker run -d -p 5000:80 <your_image>
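For reference, a working version of the original example might look like this (a sketch; nginx in the base image listens on port 80, so the port mapping happens at run time rather than in the Dockerfile, and the image tag hello-nginx is made up):
Dockerfile
FROM nginx:alpine
COPY . /usr/share/nginx/html
EXPOSE 80
Build and run
docker build -t hello-nginx .
docker run -d -p 5000:80 hello-nginx
curl localhost:5000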

Unable to connect to Docker Nginx build

I am trying to host a simple static site using the Docker Nginx Image from Dockerhub: https://registry.hub.docker.com/_/nginx/
A note on my setup, I am using boot2docker on OSX.
I have followed the instructions, but I still cannot connect to the running container:
MacBook-Pro:LifeIT-war-games-frontend ryan$ docker build -t wargames-front-end .
Sending build context to Docker daemon 813.6 kB
Sending build context to Docker daemon
Step 0 : FROM nginx
---> 42a3cf88f3f0
Step 1 : COPY app /usr/share/nginx/html
---> Using cache
---> 61402e6eb300
Successfully built 61402e6eb300
MacBook-Pro:LifeIT-war-games-frontend ryan$ docker run --name wargames-front-end -d -p 8080:8080 wargames-front-end
9f7daa48a25bdc09e4398fed5d846dd0eb4ee234bcfe89744268bee3e5706e54
MacBook-Pro:LifeIT-war-games-frontend ryan$ curl localhost:8080
curl: (52) Empty reply from server
MacBook-Pro:LifeIT-war-games-frontend ryan$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9f7daa48a25b wargames-front-end:latest "nginx -g 'daemon of 3 minutes ago Up 3 minutes 80/tcp, 0.0.0.0:8080->8080/tcp, 443/tcp wargames-front-end
Instead of localhost, use the boot2docker IP. First run boot2docker ip and use that IP:
<your-b2d-ip>:8080. You also need to make sure you have forwarded port 8080 in VirtualBox for the boot2docker VM.
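The forwarding rule can also be added from the command line (a sketch; boot2docker-vm is the default VM name, adjust if yours differs):
VBoxManage controlvm "boot2docker-vm" natpf1 "nginx,tcp,127.0.0.1,8080,,8080"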
Here is how to connect to the nginx Docker container service:
docker ps # confirm nginx is running, which you have done.
docker port wargames-front-end # get the ports, for example: 80/tcp, 0.0.0.0:8080->8080/tcp, 443/tcp
boot2docker ip # get the IP address, for example: 192.168.59.103
Now you should be able to connect to:
http://192.168.59.103:8080
https://192.168.59.103:8080
Here's how I got it to work. The nginx image serves on container port 80, so the host port has to be mapped to container port 80:
docker kill wargames-front-end
docker rm wargames-front-end
docker run --name wargames-front-end -d -p 8080:80 wargames-front-end
Then I went to VirtualBox and set up port forwarding for port 8080 in the VM's network settings.

Stop a Nginx Docker container

I am trying to stop a Docker container running Nginx only after there has been no activity in the access.log of that Nginx instance for a period of time.
Is it possible to stop a Docker container from inside the container? The other solution I can think of is to have a cron running on the host OS that checks the /var/lib/docker/aufs/mnt/[container id]/ but I am planning on starting lots of containers and would prefer not to have to keep a list of IDs.
The Docker container stops when the main process in the container exits.
I set up a little Dockerfile and a start script to show how this could work in your case:
Dockerfile
FROM nginx
COPY start.sh /
CMD ["/start.sh"]
start.sh
#!/bin/bash
nginx &
sleep 20
# replace sleep 20 with your test of inactivity
nginx -s stop
Build container, run and test
$ docker build -t ng .
$ docker run -d ng
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a373e721da7 ng:latest "/start.sh" 4 seconds ago Up 3 seconds 443/tcp, 80/tcp distracted_colden
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a373e721da7 ng:latest "/start.sh" 16 seconds ago Up 16 seconds 80/tcp, 443/tcp distracted_colden
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$
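As a sketch of what the "test of inactivity" could look like, start.sh could watch the modification time of the access log. This assumes nginx writes its access log to a real file at /var/log/nginx/access.log; in the official image that path is symlinked to stdout, so it would need to be reconfigured:
#!/bin/bash
nginx
# stop after 5 minutes without new access-log entries
TIMEOUT=300
while true; do
    sleep 30
    last=$(stat -c %Y /var/log/nginx/access.log)   # last time the log was written
    now=$(date +%s)
    if [ $((now - last)) -ge "$TIMEOUT" ]; then
        nginx -s stop
        exit 0
    fi
done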
You could share your Docker socket with that image and then perform any operations necessary.
To share the Docker socket with the container, do something like this:
docker run -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/bin/docker YOUR_IMAGE
Inside the container, the environment gives you the container ID; for example, run echo $HOSTNAME within the container.
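With the socket and binary mounted like that, the container can stop itself once your inactivity check triggers, since $HOSTNAME defaults to the container's short ID:
# run inside the container when the inactivity check triggers
docker stop "$HOSTNAME"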
I ran an nginx container and then wasn't able to fire it up again:
nginx: [emerg] bind() to unix:/var/run/nchan.sock failed (98: Address already in use)
The easiest fix was to just "prune":
docker system prune
Docker can run a command in your running container using the exec command:
docker exec [-d|--detach[=false]] [--help] [-i|--interactive[=false]] [-t|--tty[=false]] CONTAINER COMMAND [ARG...]
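For example, assuming a container named mynginx, you could inspect activity or shut nginx down from the host (stopping nginx also stops the container, since it is the main process in the official image):
docker exec mynginx tail -n 5 /var/log/nginx/access.log   # check recent activity
docker exec mynginx nginx -s quit                         # graceful shutdown from the host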
