I have a docker-compose file in which both nginx and nginx-prometheus-exporter run as containers. Here are the relevant parts:
nginx:
  container_name: nginx
  image: nginx:1.19.3
  restart: always
  ports:
    - 80:80
    - 443:443
    - "127.0.0.1:8080:8080"
nginx-exporter:
  image: nginx/nginx-prometheus-exporter:0.8.0
  command:
    -nginx.scrape-uri
    -http://127.0.0.1:8080/stub_status
I tried http://nginx:8080/stub_status, nginx:8080/stub_status, and 127.0.0.1:8080/stub_status for -nginx.scrape-uri, but none of them worked and I got:
Could not create Nginx Client: failed to get http://127.0.0.1:8080/stub_status: Get "http://127.0.0.1:8080/stub_status": dial tcp 127.0.0.1:8080: connect: connection refused
Also, localhost:8080/stub_status is reachable on my VM using curl.
The problem was the missing "- ": each entry under command has to be its own YAML list item.
nginx:
  container_name: nginx
  image: nginx:1.19.3
  restart: always
  ports:
    - 80:80
    - 443:443
    - "127.0.0.1:8080:8080"
nginx-exporter:
  image: nginx/nginx-prometheus-exporter:0.8.0
  command:
    - -nginx.scrape-uri
    - http://127.0.0.1:8080/stub_status
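For what it's worth, a small side-by-side sketch of how the two forms are parsed (same flag and URI as above):

# Broken: YAML folds the two indented lines into the single string
# "-nginx.scrape-uri -http://127.0.0.1:8080/stub_status" instead of a two-element list.
command:
  -nginx.scrape-uri
  -http://127.0.0.1:8080/stub_status

# Fixed: each argument is its own list item, and the URI carries no leading dash.
command:
  - -nginx.scrape-uri
  - http://127.0.0.1:8080/stub_status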
In my case, I was running nginx-prometheus-exporter in Docker. Instead of using http://127.0.0.1:8080/stub_status, find the IP of your host machine (where Docker is running) by running the command below:
ip addr show docker0
and pass the URL in the docker run command like this:
-nginx.scrape-uri=http://<host_machine_IP>:8080/stub_status
Note: change the port and the path ("/stub_status") in the command above to match your nginx configuration.
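If the exporter is started from docker-compose instead of docker run, the equivalent would be roughly the sketch below; <host_machine_IP> is a placeholder for the docker0 address found above:

nginx-exporter:
  image: nginx/nginx-prometheus-exporter:0.8.0
  command:
    - -nginx.scrape-uri
    - http://<host_machine_IP>:8080/stub_status  # placeholder: the host's docker0 IP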
I have several services running in Docker containers, all behind an Nginx reverse proxy (using nginx-proxy/nginx-proxy). All of the services run on different subdomains, and they all work correctly with HTTPS etc.
I am now trying to host another container that uses Nginx to serve a static Web site on the domain itself, without a subdomain, but I am struggling to get it to work.
Here is my minimal docker-compose.yml:
version: "3"
services:
example:
image: nginx
expose:
- 80
- 443
restart: unless-stopped
environment:
VIRTUAL_HOST: domain.tld
LETSENCRYPT_HOST: domain.tld
container_name: example
volumes:
- ./content:/usr/share/nginx/html
networks:
default:
external:
name: nginx-proxy
This does not work: it shows a 500 Internal Server Error whether I try to access it through HTTP or HTTPS. If I do the exact same thing but using subdomain.domain.tld for the VIRTUAL_HOST and LETSENCRYPT_HOST environment variables, it works fine for both.
If I add the following to the docker-compose.yml file:
ports:
  - "8003:80"
  - "8443:443"
...then I can access the site at http://domain.tld:8003, but https://domain.tld:8443 shows a failure to connect and https://domain.tld still shows a 500 error. http://domain.tld redirects to https://domain.tld.
The issue was that I had AAAA records for the root domain, but not the subdomains, and I was using nginx-proxy/acme-companion to automatically generate my SSL certificates.
The nginx-proxy/acme-companion documentation states the following under the ‘Requirements’ heading:
If your (sub)domains have AAAA records set, the host must be publicly reachable over IPv6 on port 80 and 443.
So, per the nginx-proxy/nginx-proxy documentation, to enable IPv6:
You can activate the IPv6 support for the nginx-proxy container by passing the value true to the ENABLE_IPV6 environment variable:
docker run -d -p 80:80 -e ENABLE_IPV6=true -v /var/run/docker.sock:/tmp/docker.sock:ro nginxproxy/nginx-proxy
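In compose form, enabling IPv6 on the proxy looks roughly like the sketch below (this assumes the proxy is defined in its own compose file; publishing port 443 is an assumption based on my setup):

nginx-proxy:
  image: nginxproxy/nginx-proxy
  ports:
    - "80:80"
    - "443:443"   # assumption: the proxy also terminates HTTPS, as in my setup
  environment:
    ENABLE_IPV6: "true"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro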
My final docker-compose.yml looks like this:
version: "3"
services:
example:
image: nginx
expose:
- 80
- 443
restart: unless-stopped
environment:
VIRTUAL_HOST: domain.tld,www.domain.tld
LETSENCRYPT_HOST: domain.tld,www.domain.tld
container_name: example
volumes:
- ./content:/usr/share/nginx/html:ro
- ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
networks:
default:
external:
name: nginx-proxy
I am trying to connect from my Docker container to a database that has an IP of x.x.x.x.
I am getting this error:
java.net.NoRouteToHostException: No route to host (Host unreachable)
I also tried running the container using --network=host, which takes a similar approach to the attempt above.
As I mentioned in the comments, here is the sample docker-compose file.
version: '3.7'
services:
  entitygraph:
    image: entitygraph-by-jar:latest
    container_name: entitygraph
    restart: always
    networks:
      - eg-net
    ports:
      - 9999:8080
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://eg-mysql/customers?useSSL=false
      SPRING_PROFILES_ACTIVE: mysql
  eg-mysql:
    image: mysql:5.7
    restart: always
    networks:
      - eg-net
    container_name: eg-mysql
    environment:
      MYSQL_DATABASE: customers
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
      MYSQL_ROOT_PASSWORD:
networks:
  eg-net:
    name: eg-net
In this file, the entitygraph application is trying to talk to MySQL. Inside my application, the connection string to MySQL is:
spring.datasource.url=jdbc:mysql://localhost:3306/customers?useSSL=false
So Docker will override the spring.datasource.url property with the one I specified in my docker-compose file. Note that the host is eg-mysql, the service name, which Docker's embedded DNS resolves to the container's internal IP on the eg-net network and uses for communication.
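For clarity, the effective configuration inside the entitygraph container is equivalent to the snippet below; the explicit :3306 is just MySQL's default port and is not in the original file:

entitygraph:
  environment:
    # eg-mysql is resolved by Docker's embedded DNS on the shared eg-net network;
    # no published port is needed for container-to-container traffic.
    SPRING_DATASOURCE_URL: jdbc:mysql://eg-mysql:3306/customers?useSSL=false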
I don't know your application architecture; if I did, I could give you a more specific answer to your problem.
I have two Docker containers: node-a and node-b. One of them (node-b) should send HTTP requests to the other (node-a). I'm starting them with Docker Compose. When I try to bring them up with Compose, I get this error:
Get http://node-a:9098: dial tcp 172.18.0.3:9098: getsockopt: connection refused
EXPOSE is declared in the Dockerfile of node-a:
EXPOSE 9098
docker-compose.yml:
version: '3'
services:
  node-a:
    image: a
    ports:
      - 9098:9098
    volumes:
      - ./:/a-src
    depends_on:
      - redis
  node-b:
    image: b
    volumes:
      - ./:/b-src
    depends_on:
      - node-a
Forwarding is enabled. I believe the server starts, because it works well without Docker.
Where should I pay attention? What could cause this problem?
EDIT:
I've tried adding links, but it had no effect:
node-b:
  image: b
  volumes:
    - ./:/b-src
  links:
    - node-a
  depends_on:
    - node-a
Also, links seems to be deprecated and does the same thing as depends_on in version 2+ of the docker-compose.yml format:
When docker-compose executes V2 files, it will automatically build a network between all of the containers defined in the file, and every container will be immediately able to refer to the others just using the names defined in the docker-compose.yml file.
Link a container to the service using links. (docker-compose documentation on links).
Example:
node-b:
  image: b
  volumes:
    - ./:/b-src
  depends_on:
    - node-a
  links:
    - node-a
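For completeness, here is a minimal sketch of the same two services relying only on the default compose network (per the quote above, the service names already resolve without links). Whether the request then succeeds also depends on node-a listening on 0.0.0.0:9098 inside its container, which is an assumption here:

version: '3'
services:
  node-a:
    image: a
    ports:
      - 9098:9098   # only needed for access from the host, not for node-b
  node-b:
    image: b
    depends_on:
      - node-a      # node-b can reach it as http://node-a:9098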
A lot of times, I see ports written twice, separated by a colon, like in this Docker Compose file from the Docker Networking in Compose page:
version: "3"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
networks:
default:
# Use a custom driver
driver: custom-driver-1
I've often wondered why it is "8000:8000" and not simply "8000".
Then I saw this example, where the two ports are different:
version: "3"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
ports:
- "8001:5432"
Can someone explain what this port representation means?
The first port is the host's port and the second is the remote port (i.e. the port inside the container). That expression binds the container port to the host port.
In the example you map the container's port 8000 to the host's port 8000, but it's perfectly normal to use different ports (e.g. 48080:8080); in the second file, port 5432 inside the db container is published as port 8001 on the host.
If the host port and the ':' are omitted, e.g. docker run -d -p 3000 myimage, Docker will auto-assign a (high-numbered) host port for you. You can check it by running docker ps.
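A short compose-format sketch of the three variants discussed above (the worker service and myimage are made up for illustration):

services:
  web:
    build: .
    ports:
      - "8000:8000"   # host port 8000 -> container port 8000
  db:
    image: postgres
    ports:
      - "8001:5432"   # host port 8001 -> container port 5432
  worker:
    image: myimage    # hypothetical image
    ports:
      - "3000"        # container port 3000, host port auto-assigned (check with docker ps)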
I need to run multiple WordPress containers, all linked to a single MySQL container, plus an Nginx reverse proxy to easily handle VIRTUAL_HOSTs.
Here is what I'm trying to do (with only one WP for now):
Wordpress (hub.docker.com/_/wordpress/)
Mysql (hub.docker.com/_/mysql/)
Nginx Reverse Proxy (github.com/jwilder/nginx-proxy)
I'm working on OS X, and this is what I run in the terminal:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
docker run --name some-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root -d mysql:latest
docker run -e VIRTUAL_HOST=wordpress.mylocal.com --name wordpress --link some-mysql:mysql -p 8080:80 -d wordpress
My Docker machine is running on 192.168.99.100, and that address brings me to a 503 nginx/1.9.12 error, of course.
192.168.99.100:8080 brings me to the WordPress site as expected.
But http://wordpress.mylocal.com is not working: it's not redirecting to 192.168.99.100:8080, and I don't understand what I'm doing wrong.
Any suggestions? Thanks!
First of all, I recommend you start using docker-compose; running your containers and finding errors will become much easier.
As for your case, it seems that you should be using VIRTUAL_PORT to direct the proxy to your container on 8080.
Secondly, you cannot have two containers (the nginx-proxy and WordPress) mapped to the same port on the host. A rough compose sketch of this setup follows below.
Good luck!
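A minimal sketch, reusing the images and the wordpress.mylocal.com host from the question (the MySQL password is the placeholder from the question; adjust to your setup):

version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"                       # only the proxy publishes a port on the host
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  some-mysql:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: root       # placeholder from the question
  wordpress:
    image: wordpress
    environment:
      VIRTUAL_HOST: wordpress.mylocal.com
      # VIRTUAL_PORT is only needed when the site listens on a port other than 80
      # inside the container; the official wordpress image listens on 80.
      WORDPRESS_DB_HOST: some-mysql   # the service name resolves on the compose network
      WORDPRESS_DB_PASSWORD: root
    # no published ports: the proxy reaches WordPress over the compose network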
One:
Use docker compose.
vi docker-compose.yaml
Two:
Paste this into the file:
version: '3'
services:
  nginx-proxy:
    image: budry/jwilder-nginx-proxy-arm:0.6.0
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - certs:/etc/nginx/certs:ro
      - confd:/etc/nginx/conf.d
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy
    environment:
      - DEFAULT_HOST=example2.com
    networks:
      - frontend
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion:stable
    restart: always
    volumes:
      - certs:/etc/nginx/certs:rw
      - confd:/etc/nginx/conf.d
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      # - LETSENCRYPT_SINGLE_DOMAIN_CERTS=true
      # - LETSENCRYPT_RESTART_CONTAINER=true
      - DEFAULT_EMAIL=example@mail.com
    networks:
      - frontend
    depends_on:
      - nginx-proxy
  #########################################################
  # ..The rest of the containers go here..
  #########################################################
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
volumes:
  certs:
  html:
  vhostd:
  confd:
  dbdata:
  maildata:
  mailstate:
  maillogs:
Three:
Configure as many containers as you need, to your liking. Here are some examples:
mysql (MariaDB):
mysql:
  image: jsurf/rpi-mariadb:latest # MariaDB 10 (82eec62cce90)
  restart: always
  environment:
    MYSQL_DATABASE: nameExample
    MYSQL_USER: user
    MYSQL_PASSWORD: password
    MYSQL_RANDOM_ROOT_PASSWORD: passwordRoot
    MYSQL_ROOT_HOST: '%'
  ports:
    - "3306:3306"
  networks:
    - backend
  command: --init-file /data/application/init.sql
  volumes:
    - /path_where_it_will_be_saved_on_your_machine/init.sql:/data/application/init.sql
    - /physical_route/data:/var/lib/mysql
nginx-php7.4:
nginx_php:
  image: tobi312/php:7.4-fpm-nginx-alpine-arm
  hostname: example1.com
  restart: always
  expose:
    - "80"
  volumes:
    - /physical_route:/var/www/html:rw
  environment:
    - VIRTUAL_HOST=example1.com
    - LETSENCRYPT_HOST=example1.com
    - LETSENCRYPT_EMAIL=example1@mail.com
    - ENABLE_NGINX_REMOTEIP=1
    - PHP_ERRORS=1
  depends_on:
    - nginx-proxy
    - letsencrypt
    - mysql
  networks:
    - frontend
    - backend
WordPress:
wordpress:
  image: wordpress
  restart: always
  ports:
    - 8080:80
  environment:
    - WORDPRESS_DB_HOST=db
    - WORDPRESS_DB_USER=exampleuser
    - WORDPRESS_DB_PASSWORD=examplepass
    - WORDPRESS_DB_NAME=exampledb
    - VIRTUAL_HOST=example2.com
    - LETSENCRYPT_HOST=example2.com
    - LETSENCRYPT_EMAIL=example2@mail.com
  volumes:
    - wordpress:/var/www/html # This volume must be added to the volumes section from step two
You can find many examples and documentation here
Be careful: in some of the examples I used images that are built for the Raspberry Pi (ARM), and they are very likely to give problems on amd64 and 32-bit Intel systems. You should search for and select the images that suit your CPU and operating system.
Four:
Run this command to launch all containers:
docker-compose up -d --remove-orphans
"--remove-orphans" serves to remove dockers that are no longer in your docker-compose file
Five:
Once you have the above steps done, you can come back and ask about whatever you need, and we will be happy to read your compose file rather than dying trying to read a long list of commands.
For your case, I think the best solution is to use an nginx reverse proxy that watches the Docker socket and can pass requests to different virtual hosts.
For example, let's say you have 3 WordPress instances:
WP1 -> port binding to 81:80
WP2 -> port binding to 82:80
WP3 -> port binding to 83:80
For each one of them, you should set a Docker environment variable with the virtual host name you want to use:
WP1-> foo.bar1
WP2-> foo.bar2
WP3-> foo.bar3
After doing so, you should have 3 different WordPress instances with ports published on 81, 82 and 83.
Now download and start this nginx Docker container (reverse proxy) here.
It listens on the Docker socket and receives all traffic coming to your machine on port 80.
When you start each WP container, the environment variable you provide lets the proxy detect which request should go to which WP instance.
This is an example of how you should run one of your WP Docker images:
docker run -e VIRTUAL_HOST=foo.bar1.com -p 81:80 -d wordpress:tag
In this case the virtual host is matched against the host name coming from the HTTP request.
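A compose-format sketch of the same idea, using the placeholder host names from the docker run example above (wordpress:tag stands for whatever image tag you use):

version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"                 # only the proxy listens on port 80 of the host
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  wp1:
    image: wordpress:tag        # placeholder tag, as in the docker run example
    environment:
      VIRTUAL_HOST: foo.bar1.com
    ports:
      - "81:80"                 # optional direct access, as described above
  wp2:
    image: wordpress:tag
    environment:
      VIRTUAL_HOST: foo.bar2.com
    ports:
      - "82:80"
  wp3:
    image: wordpress:tag
    environment:
      VIRTUAL_HOST: foo.bar3.com
    ports:
      - "83:80"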