Docker and Rancher - Run multiple workers - asynchronous

I need to run 3 commands to run my application:
$ celery -A name worker
$ daphne name.asgi:channel_layer -b 0.0.0.0 -p 8000
$ python manage.py runworker
All three commands use the same image, and I do not know whether it is viable to create a container for each command. What should I do?
Thanks for your help.

I realized that they are all services, so there should be a container for each one.
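One common way to run the three commands as separate containers sharing one image is a Compose file with one service per command; a minimal sketch, where the image name myapp is a placeholder and the commands are the ones from the question:

```yaml
version: "2"
services:
  celery:
    image: myapp                     # placeholder: your application image
    command: celery -A name worker
  daphne:
    image: myapp
    command: daphne name.asgi:channel_layer -b 0.0.0.0 -p 8000
    ports:
      - "8000:8000"                  # publish the ASGI interface server
  runworker:
    image: myapp
    command: python manage.py runworker
```

Each service gets its own container and can be scaled and restarted independently, which matches the "one process per container" convention.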

Related

Can we create more than 2 Riak clusters

Can we set up a Riak cluster with only 2 nodes, like this:
node01
node02
Or can we have more or fewer than 2 nodes? If we can, please let me know how we can achieve that.
It depends on whether you are running your cluster under Docker or not.
If you are under docker
You can in this case start a new riak node with the command
docker run -d -P -e COORDINATOR_NODE=172.17.0.3 --label cluster.name=<your main node name> basho/riak-kv
For more explanation about this line you can read the Basho post: Running Riak in Docker
If you are not under a docker container
As I haven't experimented with this case personally, I will only link the documentation for adding a new node to a Riak cluster: Running a Cluster
Hope I understood the question correctly.
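For completeness, outside Docker a second node typically joins the first using riak-admin; a sketch only, since (as said above) I haven't verified this setup myself, and the node names are placeholders:

```shell
# Run on node02 to join node01's cluster, then review and commit the change
riak-admin cluster join riak@node01
riak-admin cluster plan
riak-admin cluster commit
```

The plan/commit steps exist so you can batch several membership changes and apply them at once.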

How to properly start nginx in Docker

I want nginx in a Docker container to host a simple static hello world html website. I want to simply start it with "docker run imagename". In order to do that I added the run parameters to the Dockerfile. The reason I want to do that is that I would like to host the application on Cloud Foundry in a next step. Unfortunately I get the following error when doing it like this.
Dockerfile
FROM nginx:alpine
COPY . /usr/share/nginx/html
EXPOSE 5000
CMD ["nginx -d -p 5000:5000"]
Error
Error starting userland proxy: Bind for 0.0.0.0:5000: unexpected error Permission denied.
From:
https://docs.docker.com/engine/reference/builder/#expose
EXPOSE does not make the ports of the container accessible to the host. To do that, you must use either the -p flag to publish a range of ports or the -P flag to publish all of the exposed ports. You can expose one port number and publish it externally under another number
CMD ["nginx -d -p 5000:5000"]
Your Dockerfile starts from
FROM nginx:alpine
which already starts nginx, so the CMD line is unnecessary.
After you build from your Dockerfile, start the container with
docker run -d -p 5000:5000 <your_image>
Edit:
If you want to use docker port 80 -> machine port 5000
docker run -d -p 5000:80 <your_image>
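Putting the answer together, a minimal corrected Dockerfile sketch (the nginx:alpine base image listens on port 80 and already defines a CMD that runs nginx in the foreground):

```dockerfile
FROM nginx:alpine
# copy the static site into nginx's default web root
COPY . /usr/share/nginx/html
EXPOSE 80
# no CMD needed: the base image already starts nginx in the foreground
```

Build and run it, publishing host port 5000 to container port 80 (image name hello is a placeholder): docker build -t hello . && docker run -d -p 5000:80 hello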

NGINX service failure in Dockers on limiting memory and CPU usage

I have one master and 5 worker nodes. I am using the following command while deploying the nginx service.
It fails:
docker service create --name foo -p 32799:80 -p 32800:443 nginx --limit-cpu 0.5 --limit-memory 512M
On the other hand, this works:
docker service create --name foo -p 32799:80 -p 32800:443 nginx
Please let me know how I can limit CPU to 1 core and memory to 512M.
Change your command to the following and try again:
docker service create --limit-cpu 0.5 --limit-memory 512M --name foo -p 32799:80 -p 32800:443 nginx
Anything following the image name is treated as the COMMAND and its arguments, so the --limit flags must come before the image name.
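Once the service is created with the flags in the right place, you can confirm the limits were applied by inspecting the service; a sketch, assuming the service name foo from above:

```shell
# Print the resource limits recorded in the service spec
docker service inspect --format '{{ .Spec.TaskTemplate.Resources.Limits }}' foo
```

CPU limits are stored as NanoCPUs (0.5 CPU = 500000000) and memory as bytes.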

Unable to run docker commands

I am running docker using the command
sudo docker -H 0.0.0.0:2375 -d &
I am then using the docker-java client to create images and run containers in the following way:
DockerClient dockerClient = DockerClientBuilder.getInstance("http://localhost:2375").build();
CreateContainerResponse container = dockerClient.createContainerCmd(image_name)
.exec();
dockerClient.startContainerCmd(container.getId()).exec();
This works fine and the docker logs look fine too. But when I try to use any of the docker commands including docker ps, docker images, docker info, all of them fail with the following error
FATA[0000] Get http:///var/run/docker.sock/v1.18/info: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
Using sudo also does not solve the problem. I am running Docker on Unix. Any thoughts?
You have started up Docker listening on a TCP socket. This means that when the docker client attempts to connect to the default Unix-domain socket, there's nothing there. The error message is pretty clear about that:
dial unix /var/run/docker.sock: no such file or directory.
You need to tell the docker client where to connect, just like you have to provide that information to the DockerClientBuilder class in your code. You can do this (a) using the -H option to the client or (b) using the DOCKER_HOST environment variable.
For example:
$ docker -H http://localhost:2375 ps
$ docker -H http://localhost:2375 pull alpine
Or:
$ export DOCKER_HOST=http://localhost:2375
$ docker ps
$ docker pull alpine

Stop a Nginx Docker container

I am trying to stop a Docker container running Nginx only after there has been no activity in the access.log of that Nginx instance for a period of time.
Is it possible to stop a Docker container from inside the container? The other solution I can think of is to have a cron running on the host OS that checks the /var/lib/docker/aufs/mnt/[container id]/ but I am planning on starting lots of containers and would prefer not to have to keep a list of IDs.
The docker container stops when the main process in the container stops.
I set up a small Dockerfile and a start script to show how this could work in your case:
Dockerfile
FROM nginx
COPY start.sh /
CMD ["/start.sh"]
start.sh
#!/bin/bash
nginx &
sleep 20
# replace sleep 20 with your test of inactivity
nginx -s stop
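The sleep 20 line above is just a placeholder; a minimal sketch of an actual inactivity check, assuming GNU stat (-c %Y) and that nginx writes to /var/log/nginx/access.log:

```shell
#!/bin/sh
# Block until FILE has gone TIMEOUT seconds without being modified, then return.
wait_for_idle() {
  file=$1
  timeout=$2
  while :; do
    now=$(date +%s)
    # mtime as epoch seconds; a missing file counts as idle forever
    mtime=$(stat -c %Y "$file" 2>/dev/null || echo 0)
    if [ $((now - mtime)) -ge "$timeout" ]; then
      return 0
    fi
    sleep 1
  done
}
```

In start.sh, sleep 20 would then become something like wait_for_idle /var/log/nginx/access.log 300, followed by stopping nginx.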
Build container, run and test
$ docker build -t ng .
$ docker run -d ng
$ docker ps
CONTAINER ID   IMAGE       COMMAND       CREATED          STATUS          PORTS             NAMES
3a373e721da7   ng:latest   "/start.sh"   4 seconds ago    Up 3 seconds    443/tcp, 80/tcp   distracted_colden
$ docker ps
CONTAINER ID   IMAGE       COMMAND       CREATED          STATUS          PORTS             NAMES
3a373e721da7   ng:latest   "/start.sh"   16 seconds ago   Up 16 seconds   80/tcp, 443/tcp   distracted_colden
$ docker ps
CONTAINER ID   IMAGE       COMMAND       CREATED          STATUS          PORTS             NAMES
$
Alternatively, you could share the Docker socket with the container and then perform any operations you need.
To share the Docker socket and binary with the container, do something like this:
docker run -v /var/run/docker.sock:/run/docker.sock -v $(which docker):/bin/docker YOUR_IMAGE
The container's ID is available through environment variables; for example, run echo $HOSTNAME inside the container.
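With the socket and binary mounted like that, the container can ask the daemon to stop it; a sketch:

```shell
# Inside the container: $HOSTNAME is the container's own ID,
# so this tells the host daemon to stop this container.
docker stop "$HOSTNAME"
```

Note that this gives the container full control over the host's Docker daemon, so only use it for trusted images.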
I ran an nginx container and then wasn't able to fire it up again:
nginx: [emerg] bind() to unix:/var/run/nchan.sock failed (98: Address already in use)
The easiest fix was to just "prune":
docker system prune
Docker can run a command in your running container using the exec command:
docker exec [-d|--detach[=false]] [--help] [-i|--interactive[=false]] [-t|--tty[=false]] CONTAINER COMMAND [ARG...]
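For example, to stop nginx gracefully in a running container from the host (the container name my_nginx is a placeholder):

```shell
# -s quit asks nginx to finish in-flight requests before exiting
docker exec my_nginx nginx -s quit
```

Once nginx exits, the container's main process ends and the container stops.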
