The idea is simple: I need to send a signal from one container to another to restart nginx.
Is connecting to the nginx container from the first one over SSH a good solution?
Do you have other recommended ways to do this?
I don't recommend installing SSH: Docker containers are not virtual machines, and they should respect the microservices architecture to benefit from the many advantages it provides.
To send a signal from one container to another, you can use the Docker API.
First, you need to share /var/run/docker.sock with the containers that need it:
docker run -d --name control -v /var/run/docker.sock:/var/run/docker.sock <Control Container>
To send a signal to a container named nginx, you can do the following:
echo -e "POST /containers/nginx/kill?signal=HUP HTTP/1.0\r\n" | \
nc -U /var/run/docker.sock
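If curl is available in the control container, the same API call reads a bit more naturally (this assumes curl 7.40+ for --unix-socket support):
curl --unix-socket /var/run/docker.sock -X POST "http://localhost/containers/nginx/kill?signal=HUP"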
Another option is using a custom image with a custom script that checks the nginx config files and, if the checksum has changed, sends a reload signal. This way, nginx reloads automatically each time you change the config, or you can still reload manually using commands. These kinds of scripts are common among Kubernetes users. The following is an example:
nginx "$#"
oldcksum=`cksum /etc/nginx/conf.d/default.conf`
inotifywait -e modify,move,create,delete -mr --timefmt '%d/%m/%y %H:%M' --format '%T' \
/etc/nginx/conf.d/ | while read date time; do
newcksum=`cksum /etc/nginx/conf.d/default.conf`
if [ "$newcksum" != "$oldcksum" ]; then
echo "At ${time} on ${date}, config file update detected."
oldcksum=$newcksum
nginx -s reload
fi
done
Don't forget to install the inotify-tools package, which provides inotifywait.
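A minimal Dockerfile sketch for such an image could look like this (assuming the script above is saved as nginx-reload.sh; the file name is just an example):
FROM nginx:alpine
RUN apk add --no-cache inotify-tools
COPY nginx-reload.sh /usr/local/bin/nginx-reload.sh
RUN chmod +x /usr/local/bin/nginx-reload.sh
# Clear the base image's CMD so the script starts nginx itself and the
# inotifywait loop keeps the container in the foreground.
ENTRYPOINT ["/usr/local/bin/nginx-reload.sh"]
CMD []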
In the Docker vignette/documentation, they give an example with a Shiny app, but don't exactly specify what their parameters mean. Some of them are self-explanatory, but others aren't. More specifically:
https://rstudio.github.io/renv/articles/docker.html
RENV_PATHS_CACHE_HOST=/opt/local/renv/cache
RENV_PATHS_CACHE_CONTAINER=/renv/cache
docker run --rm \
-e "RENV_PATHS_CACHE=${RENV_PATHS_CACHE_CONTAINER}" \
-v "${RENV_PATHS_CACHE_HOST}:${RENV_PATHS_CACHE_CONTAINER}" \
-p 14618:14618 \
R -s -e 'renv::restore(); shiny::runApp(host = "0.0.0.0", port = 14618)'
What is RENV_PATHS_CACHE_HOST?
And is RENV_PATHS_CACHE_CONTAINER the location where my cache will be when running the image instance/container?
I'm not entirely sure how to use this example, but I feel I'll need it.
The example here tries to demonstrate how one might mount an renv cache from the host filesystem onto a Docker container.
In this case, RENV_PATHS_CACHE_HOST points to a (theoretical) cache directory on the host filesystem, at /opt/local/renv/cache, whereas RENV_PATHS_CACHE_CONTAINER points to the location in the container where the host cache will be visible.
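In other words, whatever is stored in the host directory becomes visible inside the container at the container path, and RENV_PATHS_CACHE tells renv inside the container to look there. A minimal sketch of that mapping, using the paths from the example above (the image name is a placeholder):
mkdir -p /opt/local/renv/cache          # the cache directory on the host
docker run --rm \
  -e "RENV_PATHS_CACHE=/renv/cache" \
  -v "/opt/local/renv/cache:/renv/cache" \
  my-r-image \
  ls /renv/cache                        # shows the host cache contents from inside the container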
I have deployed a Node.js app on a Google compute instance via a Docker container. Is there a recommended way to pass the GOOGLE_APPLICATION_CREDENTIALS to the docker container?
I see the documentation states that GCE has Application Default Credentials (ADC), but these are not available in the docker container. (https://cloud.google.com/docs/authentication/production)
I am a bit new to docker & GCP, so any help would be appreciated.
Thank you!
I found documentation describing how to inject your GOOGLE_APPLICATION_CREDENTIALS into a Docker container in order to test Cloud Run locally. I know your case is not Cloud Run, but I believe the same commands can be used to inject your credentials into the container.
Since links and their contents can change over time, I will copy the steps needed to inject the credentials here.
Refer to Getting Started with Authentication for instructions
on generating, retrieving, and configuring your Service Account
credentials.
The following Docker run flags inject the credentials and
configuration from your local system into the local container:
Use the --volume (-v) flag to inject the credential file into the
container (assumes you have already set your
GOOGLE_APPLICATION_CREDENTIALS environment variable on your machine):
-v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro
Use the --env (-e) flag to set the
GOOGLE_APPLICATION_CREDENTIALS variable inside the container:
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json
Optionally, use this fully configured Docker run command:
PORT=8080 && docker run \
-p 9090:${PORT} \
-e PORT=${PORT} \
-e K_SERVICE=dev \
-e K_CONFIGURATION=dev \
-e K_REVISION=dev-00001 \
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
-v $GOOGLE_APPLICATION_CREDENTIALS:/tmp/keys/FILE_NAME.json:ro \
gcr.io/PROJECT_ID/IMAGE
Note that the path
/tmp/keys/FILE_NAME.json
shown in the example above is a reasonable location to place your
credentials inside the container. However, other directory locations
will also work. The crucial requirement is that the
GOOGLE_APPLICATION_CREDENTIALS environment variable must match the
bind mount location inside the container.
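Outside of the Cloud Run specific flags (K_SERVICE and friends), the same idea applied to the Node.js container from your question comes down to just the volume and environment flags. A minimal sketch (the image name and key file name are placeholders):
docker run -d \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/service-account.json \
  -v "$GOOGLE_APPLICATION_CREDENTIALS":/tmp/keys/service-account.json:ro \
  gcr.io/PROJECT_ID/my-node-app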
Hope this works for you.
I am new to Docker. I have applications running in multiple containers. Now I would like to publish all my apps. What I am planning to do is make a cluster containing all my applications. I want at least 4 containers:
An Nginx container facing the internet as a reverse proxy. It is responsible for redirecting traffic to the other containers, since they are not directly accessible from the internet.
A Node.js container that publishes a Node.js web app (http://www.node-app.me).
A Java EE container that publishes a Java EE application (http://www.java_ee-app.me).
A Django container that publishes a Django application (http://www.django-app.me).
This is the idea I have, but I don't know how to set up the nginx container to play the proxy role and forward requests to the correct container, so that if a user sends a request like http://www.node-app.me, the nginx container returns the result from Node_js, and so on. Can you please give me an idea of where to start?
The setup could look like this (sorry, I am not very good at drawing):
Unless you have a specific need for nginx, I suggest you use Træfik to do the reverse proxy. It can be configured to dynamically pick up reverse proxy rules via labels on your containers. Here's a basic example.
First, create a common network for Træfik and your three containers.
docker network create traefik
Run Træfik with port 80 exposed and the docker backend enabled.
docker run --name traefik \
-p 80:80 \
-v /var/run/docker.sock:/var/run/docker.sock \
--network traefik \
traefik:1.2.3-alpine \
--entryPoints='Name:http Address::80' \
--docker \
--docker.watch
Run your three services with the appropriate labels. Make sure they share a common network with Træfik so that Træfik can reach them. The node_js one might look something like this:
docker run --name node_js \
--network traefik \
--label 'traefik.frontend.rule=Host:www.node-app.me' \
--label 'traefik.frontend.entryPoints=http' \
--label 'traefik.port=80' \
--label 'traefik.protocol=http' \
your_node_js_image
Træfik will dynamically create a frontend rule that matches on the Host header for www.node-app.me when it sees this container running. The traefik.port and traefik.protocol labels let Træfik know how to communicate with your container.
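Once the containers are up, you can sanity-check the routing from the Docker host without touching DNS by sending a request with the expected Host header:
curl -H 'Host: www.node-app.me' http://localhost/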
See the documentation for Træfik's Docker backend for more options and details.
We are using Nginx as a reverse proxy for docker-cloud services. A script is implemented to update the Nginx config file whenever a new service deploys on Docker Cloud or a service gets a new URL on Docker Cloud.
Nginx and the script run in separate Docker containers.
The Nginx config file is mounted on the host (ECS). After updating the config file using the script, Nginx needs to be reloaded in order to apply the changes.
First, I would like to know if this is the best way of updating the Nginx config file, and also what is the best way to reload Nginx without any downtime?
Shall I recreate the Nginx container after each update? If so, how?
Or is it fine to reload Nginx from the host by monitoring changes in the config file (using a script) and reloading it with the command below?
docker exec NginxcontainerID | nginx -s reload
Shall I recreate the Nginx container after each update? If so, how?
No, you just need to reload the nginx service most of the time.
You can use:
docker exec nginxcontainername/id nginx -s reload
or
docker kill -s HUP nginxcontainername/id
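Either way, you can validate the configuration first with nginx -t, so a broken config file is caught before you reload:
docker exec nginxcontainername/id nginx -t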
Another option would be using a custom image that checks the nginx config checksum and reloads nginx whenever it changes. Example script:
nginx "$#"
oldcksum=`cksum /etc/nginx/conf.d/default.conf`
inotifywait -e modify,move,create,delete -mr --timefmt '%d/%m/%y %H:%M' --format '%T' \
/etc/nginx/conf.d/ | while read date time; do
newcksum=`cksum /etc/nginx/conf.d/default.conf`
if [ "$newcksum" != "$oldcksum" ]; then
echo "At ${time} on ${date}, config file update detected."
oldcksum=$newcksum
nginx -s reload
fi
done
You need to install the inotify-tools package, which provides inotifywait.
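If you prefer to keep the watcher on the host rather than inside the image, the same idea works with docker exec (a sketch; the config path and container name are placeholders):
# Watch the mounted config directory on the host and reload nginx in the container on changes.
# Assumes inotify-tools is installed on the host and the container is named "nginx".
inotifywait -e modify,move,create,delete -mr /path/on/host/conf.d | while read -r _; do
  docker exec nginx nginx -t && docker exec nginx nginx -s reload
done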
I'm trying to set up a docker dnsmasq container so that I can have all my docker containers look up the domain names rather than having hard-coded IPs (if they are on the same host). This fixes an issue with the fact that one cannot alter the /etc/hosts file in docker containers, and this allows me to easily update all my containers in one go, by altering a single file that the dnsmasq container references.
It looks like someone has already done the hard work for me and created a dnsmasq container. Unfortunately, it is not "working" for me. I wrote a bash script to start the container as shown below:
name="dnsmasq_"
timenow=$(date +%s)
name="$name$timenow"
sudo docker run \
-v="$(pwd)/dnsmasq.hosts:/dnsmasq.hosts" \
--name=$name \
-p='127.0.0.1:53:5353/udp' \
-d sroegner/dnsmasq
Before running that, I created the dnsmasq.hosts directory and inserted a single file within it called hosts.txt with the following contents:
192.168.1.3 database.mydomain.com
Unfortunately whenever I try to ping that domain from within:
the host
The dnsmasq container
another container on the same host
I always receive the ping: unknown host error message.
I tried starting the dnsmasq container without daemon mode so I could debug its output, which is below:
dnsmasq: started, version 2.59 cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt DBus i18n DHCP TFTP conntrack IDN
dnsmasq: reading /etc/resolv.dnsmasq.conf
dnsmasq: using nameserver 8.8.8.8#53
dnsmasq: read /etc/hosts - 7 addresses
dnsmasq: read /dnsmasq.hosts//hosts.txt - 1 addresses
I am guessing that I have not specified the -p parameter correctly when starting the container. Can somebody tell me what it should be for other docker containers to look up the DNS, or whether what I am trying to do is actually impossible?
The build script for the docker dnsmasq service needs to be changed in order to bind to your server's public IP, which in this case is 192.168.1.12 on my eth0 interface:
#!/bin/bash
NIC="eth0"
name="dnsmasq_"
timenow=$(date +%s)
name="$name$timenow"
MY_IP=$(ifconfig $NIC | grep 'inet addr:'| grep -v '127.0.0.1' | cut -d: -f2 | awk '{ print $1}')
sudo docker run \
-v="$(pwd)/dnsmasq.hosts:/dnsmasq.hosts" \
--name=$name \
-p=$MY_IP:53:5353/udp \
-d sroegner/dnsmasq
On the host (in this case Ubuntu 12), you need to update resolv.conf or the /etc/network/interfaces file so that your public IP (on the eth0 or eth1 device) is registered as the nameserver.
You may want to set Google as a secondary nameserver for whenever the container is not running, by changing the line to dns-nameservers xxx.xxx.xxx.xxx 8.8.8.8 (no comma and no extra line; both addresses go on the same line).
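For example, the relevant stanza in /etc/network/interfaces could look like this (addresses are illustrative):
# Static eth0 configuration; dns-nameservers lists the local dnsmasq host first, then Google.
auto eth0
iface eth0 inet static
    address 192.168.1.12
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.12 8.8.8.8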
You then need to restart your networking service with sudo /etc/init.d/networking restart if you updated the /etc/network/interfaces file, so that the /etc/resolv.conf file that Docker copies into the container is automatically updated.
Now restart all of your containers
sudo docker stop $CONTAINER_ID
sudo docker start $CONTAINER_ID
This causes their /etc/resolv.conf files to be updated so they point to the new nameserver settings.
DNS lookups in all your docker containers (that you built since making the changes) should now work using your dnsmasq container!
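To confirm the dnsmasq container itself is answering, you can query it directly from the host (assumes dig is installed):
dig @192.168.1.12 database.mydomain.com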
As a side note, this means that docker containers on other hosts can also take advantage of your dnsmasq service on this host, as long as their host's nameserver settings point to this server's public IP.