I generate a file URL with MinIO and return it from a RestController with a 302 status code, but I need to use an external address behind an nginx location. The MinIO temporary URL carries an X-Amz-Signature parameter, and the service host is part of what is signed, which is why I can't simply redirect the user through nginx.
For example:
host: minio-host
port: minio-port
bucket: file
filename: 333/test.jpg
minio's url: http://minio-host:minio-port/file/333/test.jpg
But I want to use an nginx location (http://my-host/minio).
If I go through nginx, I can't get the file, because the X-Amz-Signature was computed for host = http://minio-host:minio-port.
What should I do to use nginx?
I started MinIO and nginx in Docker.
I tried disabling the header changes in nginx.
I had similar issues locally.
Solution 1 is that you need to create the presigned URL through nginx with your public URL, not through one of the Docker-network addresses.
Solution 2 is to generate the link through the mc client. This also worked, but you can hardly generate every link through mc; it depends on what you are doing.
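A minimal sketch of the mc approach, assuming the alias name "public" and the credentials from the compose file below:

# Sketch only: point an mc alias at the public URL, then let mc presign against that host.
mc alias set public http://my-host/minio minioadmin minioadmin
mc share download --expire=24h public/file/333/test.jpg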
You can also refer to my answer to a similar question: https://stackoverflow.com/questions/74656068/minio-how-to-get-right-link-to-display-image-on-html/74717273#74717273
Lastly, I used a sample docker-compose with the quay.io image and nginx.
There, even the links created through the browser UI didn't work.
I solved this issue by using the bitnami/minio image.
try this:
version: '3.7'
services:
  minio:
    container_name: minio
    image: bitnami/minio
    ports:
      - '9000:9000'
      - '9001:9001'
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
    networks:
      - mynetwork
    volumes:
      - miniodata:/data
networks:
  mynetwork:
volumes:
  miniodata:
You can also put an entry in the hosts file on your host machine:
127.0.0.1 minio.local
and then use the UI at http://minio.local:9001.
This should generate the right presigned URL.
If that all works fine, you can create a multi-node installation and use nginx as a load balancer.
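For the single-node case, a minimal reverse-proxy sketch might look like the following. It assumes MinIO runs as a compose service named minio on port 9000 and is served at the root path (a path prefix such as /minio complicates things, because the request path is also signed); the key point is to pass the public Host header through unchanged so it matches what was signed:

server {
    listen 80;
    server_name my-host;

    location / {
        # keep the public host so it matches the host baked into X-Amz-Signature
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://minio:9000;
    }
}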
I'd like to know if it's possible to use nginx with docker compose as an api gateway / reverse proxy / ssl termination point without exposing any ports on the containers behind it. I.e. I want to use only the intranet created by docker compose when the containers are linked to communicate past nginx. Ideally the only publicly accessible port will be port 443 (ssl) on nginx. Is this doable? Or do I have to expose ports on my containers?
Yes, it is doable.
Just define your application in one container and nginx in another, both in the same docker-compose.yml. Link them, and only expose port 443 on the nginx container.
docker-compose.yml
nginx:
  image: nginx
  links:
    - node1:node1
    - node2:node2
    - node3:node3
  ports:
    - "443:443"

node1:
  build: ./node
node2:
  build: ./node
node3:
  build: ./node
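To complete the picture, here is a minimal nginx.conf sketch for that setup; the upstream port 80 and the certificate paths are assumptions, and only 443 is published on the host:

events {
    worker_connections 1024;
}

http {
    upstream app {
        # reachable over the compose network only; no ports published on these containers
        server node1:80;
        server node2:80;
        server node3:80;
    }

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/certs/server.crt;
        ssl_certificate_key /etc/nginx/certs/server.key;

        location / {
            proxy_pass http://app;
        }
    }
}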
More info: http://anandmanisankar.com/posts/docker-container-nginx-node-redis-example/
Regards
I am kind of in over my head with my current small project
(although it should not be that hard).
I am trying to run multiple web pages using Docker on my Pi (for testing purposes), all of which should be reachable via the Pi's IP.
I currently run a minimal lighttpd (based on the resin/rpi-raspbian image):
docker run -d -v <testconfig>:/etc/lighttpd -p <pi-ip>:8080:80 <image name>
(this server is reachable in the browser on the Pi and on other computers in the network)
For nginx I run another container with a simple config
(starting with http://nginx.org/en/docs/beginners_guide.html),
containing a web page and images to test the container config.
This container is reachable at <pi-ip>:80.
Then I tried to add a proxy to the locations
(I played around, so now there are 3 locations for the same redirect):
location /prox1/ {
    proxy_pass http://<pi-ip>:8080;
}

location /prox2/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://<pi-ip>:8080;
}

location /prox3/ {
    fastcgi_pass <pi-ip>:8080;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param QUERY_STRING $query_string;
}
Versions 1 & 2 give a 404 (I tried adding a rewrite, but then nginx redirected onto itself because the /prox1/ part was cut off).
Version 3 yields a timeout.
Now I am not sure if I still have to dig on the nginx side, or whether I have to add a connection between the containers on the Docker side.
PS: the Pi is running ArchForArm (using Xfce as desktop) because I couldn't find docker-compose in the Raspbian repository.
-- EDIT ---:
I currently start everything manually (so no compose file).
The lighttpd is started with:
docker run -d --name mylighttpd -v <testconfig>:/etc/lighttpd -p <pi-ip>:8080:80 <image name>
If I understood it correctly, it is now listening on the local network (on <pi-ip>) port 8080, which maps to the test web server's port 80. (I have added --name so it is easier to stop it.)
The nginx is started like:
docker run --name mynginx --rm -p <pi-ip>:80:80 -v <config>:/data <image name>
The 8080 was added via EXPOSE in the Dockerfile.
I currently think I misunderstood how two containers on the same machine connect, and that I should add a virtual network; I am currently trying to find some docs on that.
PS: I am not using the already existing nginx-zeroconf from the repo because it tells me it can't read the installed Docker version. (And the only example for using it with compose also needs another container which seems unavailable for my architecture.)
-- edit2 --:
For the simple proxy_pass the problem could be the URL.
I added a deeper folder "prox1" in the "www" folder, containing an index file, and that one is shown when I ask for the page.
It seems like <pi-ip>:80/prox1/
is redirected to <pi-ip>:8080/prox1/,
but if I try to rewrite it (inside "location /prox1/") it seems to first delete the prox1 part, and then decides it is now part of the original location
<pi-ip>:80/.
PS: I am aware that it might be a better design to place the system inside another network than "bridge" and only expose the proxy, but I am trying to learn this stuff in small steps.
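For what it's worth, a common way to drop the matched prefix without a rewrite is to give proxy_pass a URI part (the trailing slash); a minimal sketch, assuming lighttpd is still reachable at <pi-ip>:8080:

location /prox1/ {
    # the trailing slash makes nginx replace the matched /prox1/ prefix,
    # so /prox1/index.html is requested upstream as /index.html
    proxy_pass http://<pi-ip>:8080/;
}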
-- edit3 --:
Trying compose now, but it seems I have encountered another part I don't understand (which is why I wanted to get it working without compose first).
I try to follow http://docs.master.dockerproject.org/compose/compose-file/#ipv4-address-ipv6-address
version: '2'

networks:
  backbone:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/16
          gateway: 172.16.238.1

services:
  nginx:
    image: <nginx-image>
    ports:
      - "80:80"
    volumes:
      - <config>:/data
    depends_on:
      - lighttpd
    networks:
      backbone:
        ipv4_address: 172.16.238.2

  lighttpd:
    image: <lighttpd-image>
    ports:
      - "8080:80"
    volumes:
      - <testconfig>:/etc/lighttpd
    networks:
      backbone:
        ipv4_address: 172.16.238.3
Now I have to find out why I get "user specified IP address is supported only when connecting to networks with user configured subnets"; I assume the main networks block creates a network called "backbone".
-- edit4 --:
It seems IP blocks have to be written differently from all the docs I have seen; the correct form is:
...
networks:
  backbone:
    ipv4_address: 172.16.0.2/16
...
Now I have to figure out how to drop that part of the URL, and I am good to go.
The core problem seems to have been the missing nginx parameter proxy_redirect, which I found while rambling through the docs. The current nginx.conf is:
(/data/www contains an index.html with a relative link to some images in /data/images)
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;

        location / {
            root /data/www;
        }

        location /images/ {
            root /data;
        }

        location /prox0/ {
            proxy_pass http://lighttpd:80;
            proxy_redirect default;
            proxy_buffering off;
        }
    }
}
Starting manually on the local IP seems to work, but docker-compose is easier:
(if compose is not used, replace lighttpd:80 with the IP & port used for starting the server)
version: '2'

networks:
  backbone:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/16
          gateway: 172.16.238.1

services:
  nginx:
    image: <nginx-image>
    ports:
      - "80:80"
    volumes:
      - <config>:/data
    depends_on:
      - lighttpd
    networks:
      backbone:
        ipv4_address: 172.16.0.2

  lighttpd:
    image: <lighttpd-image>
    ports:
      - "8080:80"
    volumes:
      - <testconfig>:/etc/lighttpd
    networks:
      backbone:
        ipv4_address: 172.16.0.3
I am attempting to set up an nginx container that serves as a proxy to another container I have set up. I would like to automate this setup as I need to deploy a similar setup across several servers. For this I am using Ansible.
Here is my nginx.conf:
events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;

        location / {
            proxy_pass http://192.168.1.14:9000;
        }
    }
}
Here is the relevant part of my Ansible YAML file:
- name: Install Nginx
  docker:
    name: nginx
    image: nginx
    detach: True
    ports:
      - 8080:8080
    volumes:
      - /etc/docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
When I first run my playbook, nginx is running but is not bound to 8080 as seen here:
6a4f610e86d nginx "nginx -g 'daemon off" 35 minutes ago Up Less than a second 80/tcp, 443/tcp nginx
However, if I run the nginx container directly with:
docker run -d -v /etc/docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro -p 8080:8080 nginx
nginx and my proxy run as expected and are listening on 8080:
c3a46421045c nginx "nginx -g 'daemon off" 2 seconds ago Up 1 seconds 80/tcp, 443/tcp, 0.0.0.0:8080->8080/tcp determined_swanson
Any idea why it works one way but not the other?
Update
Per the guidance given in the selected answer, I updated my YAML file thusly:
- name: Install Nginx
  docker:
    name: nginx
    image: nginx
    detach: True
    ports:
      - 8080:8080
    expose:
      - 8080
    volumes:
      - /etc/docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
First, you need to make sure your nginx image EXPOSEs port 8080; you can specify it directly in your Ansible YAML file:
expose
(added in 1.5)
List of additional container ports to expose for port mappings or links. If the port is already exposed using EXPOSE in a Dockerfile, you don't need to expose it again.
Then, the only other difference I see, considering the Ansible docker module, is that the ports are inside double quotes:
ports:
  - "8080:9000"
Also, if you want to proxy-pass to another container on the same Docker daemon, you might want to use a link instead of a fixed IP address:
links:
  - "myredis:aliasedredis"
That way, your nginx.conf includes a fixed rule:
proxy_pass http://aliasedredis:9000;
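Putting those suggestions together, a sketch of the adjusted task might look like this; the link target "app" and its alias are placeholders for your actual backend container:

# Sketch only: "app" is a placeholder for the linked backend container/alias.
- name: Install Nginx
  docker:
    name: nginx
    image: nginx
    detach: True
    expose:
      - 8080
    ports:
      - "8080:8080"
    links:
      - "app:app"
    volumes:
      - /etc/docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro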
I have recently started migrating to Docker 1.9 and Docker Compose 1.5's networking features to replace using links.
So far with links there were no problems with nginx connecting to my php5-fpm fastcgi server located in a different container in the same group via docker-compose. Now, though, when I run docker-compose --x-networking up, my php-fpm, mongo and nginx containers boot up, but nginx quits straight away with [emerg] 1#1: host not found in upstream "waapi_php_1" in /etc/nginx/conf.d/default.conf:16
However, if I run the docker-compose command again while the php and mongo containers are running (nginx having exited), nginx starts and works fine from then on.
This is my docker-compose.yml file:
nginx:
  image: nginx
  ports:
    - "42080:80"
  volumes:
    - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro

php:
  build: config/docker/php
  ports:
    - "42022:22"
  volumes:
    - .:/var/www/html
  env_file: config/docker/php/.env.development

mongo:
  image: mongo
  ports:
    - "42017:27017"
  volumes:
    - /var/mongodata/wa-api:/data/db
  command: --smallfiles
This is my default.conf for nginx:
server {
    listen 80;

    root /var/www/test;

    error_log /dev/stdout debug;
    access_log /dev/stdout;

    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/.+\.php(/|$) {
        # Referencing the php service host (Docker)
        fastcgi_pass waapi_php_1:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;

        # We must reference the document_root of the external server ourselves here.
        fastcgi_param SCRIPT_FILENAME /var/www/html/public$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }
}
How can I get nginx to work with only a single docker-compose call?
This can be solved with the mentioned depends_on directive since it's implemented now (2016):
version: '2'

services:
  nginx:
    image: nginx
    ports:
      - "42080:80"
    volumes:
      - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - php

  php:
    build: config/docker/php
    ports:
      - "42022:22"
    volumes:
      - .:/var/www/html
    env_file: config/docker/php/.env.development
    depends_on:
      - mongo

  mongo:
    image: mongo
    ports:
      - "42017:27017"
    volumes:
      - /var/mongodata/wa-api:/data/db
    command: --smallfiles
Successfully tested with:
$ docker-compose version
docker-compose version 1.8.0, build f3628c7
Find more details in the documentation.
There is also a very interesting article dedicated to this topic: Controlling startup order in Compose
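As a side note, newer Compose file formats (2.1+) also let depends_on wait for a healthcheck rather than just container start. A minimal sketch; the healthcheck command is an assumption about what is available inside the php image:

version: '2.1'

services:
  nginx:
    image: nginx
    depends_on:
      php:
        condition: service_healthy

  php:
    build: config/docker/php
    healthcheck:
      # assumes nc is available in the image and php-fpm listens on 9000
      test: ["CMD-SHELL", "nc -z localhost 9000 || exit 1"]
      interval: 5s
      timeout: 3s
      retries: 5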
There is a possibility to use "volumes_from" as a workaround until depends_on feature (discussed below) is introduced. All you have to do is change your docker-compose file as below:
nginx:
  image: nginx
  ports:
    - "42080:80"
  volumes:
    - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  volumes_from:
    - php

php:
  build: config/docker/php
  ports:
    - "42022:22"
  volumes:
    - .:/var/www/html
  env_file: config/docker/php/.env.development

mongo:
  image: mongo
  ports:
    - "42017:27017"
  volumes:
    - /var/mongodata/wa-api:/data/db
  command: --smallfiles
One big caveat of the above approach is that the volumes of php are exposed to nginx, which is not desired. But at the moment this is one Docker-specific workaround that could be used.
depends_on feature
This would probably be a futuristic answer, because the functionality is not yet implemented in Docker (as of 1.9).
There is a proposal to introduce "depends_on" in the new networking feature introduced by Docker, but there is a long-running debate about it at https://github.com/docker/compose/issues/374. Hence, once it is implemented, depends_on could be used to order container start-up, but at the moment you would have to resort to one of the following:
make nginx retry until the php server is up - I would prefer this one
use the volumes_from workaround as described above - I would avoid this, because of the volume leakage into unnecessary containers.
If you are still lost after reading the previous comments, I reached another solution.
The main problem is the way you name the services.
In this case, if in your docker-compose.yml the service for php is called "api" or something like that, you must ensure that in the file nginx.conf the line that begins with fastcgi_pass uses the same name as the php service, i.e. fastcgi_pass api:9000;
Let's say the php service name is php_service; then the code will be:
In the file docker-compose.yml
php_service:
  build:
    dockerfile: ./docker/php/Dockerfile
In the file nginx.conf
location ~ \.php$ {
    fastcgi_pass php_service:9000;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
You can set the max_fails and fail_timeout directives of nginx to indicate that nginx should retry the connection to the container a given number of times before failing on upstream server unavailability.
You can tune these two numbers to your infrastructure and the speed at which the whole setup comes up. You can read more details in the health checks section of the URL below:
http://nginx.org/en/docs/http/load_balancing.html
Following is the excerpt from http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server
max_fails=number
sets the number of unsuccessful attempts to communicate with the
server that should happen in the duration set by the fail_timeout
parameter to consider the server unavailable for a duration also set
by the fail_timeout parameter. By default, the number of unsuccessful
attempts is set to 1. The zero value disables the accounting of
attempts. What is considered an unsuccessful attempt is defined by the
proxy_next_upstream, fastcgi_next_upstream, uwsgi_next_upstream,
scgi_next_upstream, and memcached_next_upstream directives.
fail_timeout=time
sets the time during which the specified number of unsuccessful
attempts to communicate with the server should happen to consider the
server unavailable; and the period of time the server will be
considered unavailable. By default, the parameter is set to 10
seconds.
To be precise, your modified nginx config file should be as follows (this config assumes that all the containers are up within 25 seconds; if not, change fail_timeout or max_fails in the upstream section below):
Note: I didn't test the config myself, so you could give it a try!
upstream phpupstream {
    server waapi_php_1:9000 fail_timeout=5s max_fails=5;
}

server {
    listen 80;

    root /var/www/test;

    error_log /dev/stdout debug;
    access_log /dev/stdout;

    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/.+\.php(/|$) {
        # Referencing the php service host (Docker)
        fastcgi_pass phpupstream;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;

        # We must reference the document_root of the external server ourselves here.
        fastcgi_param SCRIPT_FILENAME /var/www/html/public$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }
}
Also, per the following note from Docker (https://github.com/docker/docker.github.io/blob/master/compose/networking.md#update-containers), it is evident that the retry logic for checking the health of the other containers is not Docker's responsibility; rather, the containers should do the health check themselves.
Updating containers
If you make a configuration change to a service and run docker-compose
up to update it, the old container will be removed and the new one
will join the network under a different IP address but the same name.
Running containers will be able to look up that name and connect to
the new address, but the old address will stop working.
If any containers have connections open to the old container, they
will be closed. It is a container's responsibility to detect this
condition, look up the name again and reconnect.
My problem was that I forgot to specify the network alias in docker-compose.yml for php-fpm:
networks:
  - u-online
Now it works well!
version: "3"
services:
php-fpm:
image: php:7.2-fpm
container_name: php-fpm
volumes:
- ./src:/var/www/basic/public_html
ports:
- 9000:9000
networks:
- u-online
nginx:
image: nginx:1.19.2
container_name: nginx
depends_on:
- php-fpm
ports:
- "80:8080"
- "443:443"
volumes:
- ./docker/data/etc/nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf
- ./docker/data/etc/nginx/nginx.conf:/etc/nginx/nginx.conf
- ./src:/var/www/basic/public_html
networks:
- u-online
#Docker Networks
networks:
u-online:
driver: bridge
I believe nginx doesn't take the Docker resolver (127.0.0.11) into account, so please, can you try adding
resolver 127.0.0.11;
to your nginx configuration file?
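A sketch of where that would go, assuming the php-fpm service is named php; note that the upstream name has to sit in a variable for the resolver to be consulted at request time:

server {
    listen 80;

    # use Docker's embedded DNS and re-resolve at request time
    resolver 127.0.0.11 valid=10s;

    location ~ \.php$ {
        set $php_upstream php:9000;
        fastcgi_pass $php_upstream;
        include fastcgi_params;
    }
}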
I had the same problem because there were two networks defined in my docker-compose.yml: one backend and one frontend.
When I changed that to run containers on the same default network everything started working fine.
I found a solution for services that may be disabled for local development: use a variable for the upstream, which prevents the emergency shutdown at startup and works once the service is available.
server {
    location ^~ /api/ {
        # other config entries omitted for brevity

        set $upstream http://api.awesome.com:9000;

        # nginx will now start even if the host is not reachable
        fastcgi_pass $upstream;
        fastcgi_index index.php;
    }
}
source: https://sandro-keil.de/blog/let-nginx-start-if-upstream-host-is-unavailable-or-down/
I had the same problem and solved it. Please add the following lines to the nginx section of docker-compose.yml:
links:
  - php:waapi_php_1
The host used in the fastcgi_pass section of the nginx config must be linked inside the nginx configuration in docker-compose.yml.
At first glance, I missed that my "web" service didn't actually start, which is why nginx couldn't find the host:
web_1 | python3: can't open file '/var/www/app/app/app.py': [Errno 2] No such file or directory
web_1 exited with code 2
nginx_1 | [emerg] 1#1: host not found in upstream "web:4044" in /etc/nginx/conf.d/nginx.conf:2
Two things worth mentioning:
using the same network bridge
using links to add host resolution
My example:
version: '3'

services:
  mysql:
    image: mysql:5.7
    restart: always
    container_name: mysql
    volumes:
      - ./mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: tima#123
    network_mode: bridge

  ghost:
    image: ghost:2
    restart: always
    container_name: ghost
    depends_on:
      - mysql
    links:
      - mysql
    environment:
      database__client: mysql
      database__connection__host: mysql
      database__connection__user: root
      database__connection__password: xxxxxxxxx
      database__connection__database: ghost
      url: https://www.itsfun.tk
    volumes:
      - ./ghost-data:/var/lib/ghost/content
    network_mode: bridge

  nginx:
    image: nginx
    restart: always
    container_name: nginx
    depends_on:
      - ghost
    links:
      - ghost
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/conf.d:/etc/nginx/conf.d
      - ./nginx/letsencrypt:/etc/letsencrypt
    network_mode: bridge
If you don't specify a special network bridge, all of them will use the same default one.
Add the links section to your nginx container configuration.
You have to make the php container visible to the nginx container.
nginx:
  image: nginx
  ports:
    - "42080:80"
  volumes:
    - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  links:
    - php:waapi_php_1
With links there is an order of container startup being enforced. Without links the containers can start in any order (or really all at once).
I think the old setup could have hit the same issue if the waapi_php_1 container was slow to start up.
I think to get it working, you could create an nginx entrypoint script that polls and waits for the php container to be started and ready.
I'm not sure if nginx has any way to retry the connection to the upstream automatically, but if it does, that would be a better option.
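A rough sketch of the polling entrypoint mentioned above, assuming the php container answers on waapi_php_1:9000 and that nc is available in the nginx image:

#!/bin/sh
# wait-for-php.sh - poll the php container before starting nginx
until nc -z waapi_php_1 9000; do
  echo "waiting for waapi_php_1:9000..."
  sleep 1
done
exec nginx -g 'daemon off;'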
You have to use something like docker-gen to dynamically update nginx configuration when your backend is up.
See:
https://hub.docker.com/r/jwilder/docker-gen/
https://github.com/jwilder/nginx-proxy
I believe Nginx+ (premium version) contains a resolve parameter too (http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream)
Perhaps the best choice to avoid container-linking issues is the Docker networking features.
To make this work, Docker creates entries in /etc/hosts for each container, using the names assigned to each container.
With docker-compose --x-networking up the names are something like
[docker_compose_folder]_[service]_[incremental_number]
To not depend on unexpected changes in these names, you should use the parameter
container_name
in your docker-compose.yml as follows:
php:
  container_name: waapi_php_1
  build: config/docker/php
  ports:
    - "42022:22"
  volumes:
    - .:/var/www/html
  env_file: config/docker/php/.env.development
Make sure that it is the same name assigned in your configuration file for this service. I'm pretty sure there are better ways to do this, but it is a good approach to start with.
My Workaround (after much trial and error):
In order to get around this issue, I had to get the full name of the 'upstream' Docker container, found by running docker network inspect my-special-docker-network and getting the full name property of the upstream container as such:
"Containers": {
"39ad8199184f34585b556d7480dd47de965bc7b38ac03fc0746992f39afac338": {
"Name": "my_upstream_container_name_1_2478f2b3aca0",
Then I used this in the NGINX my-network.local.conf file, in the location block of the proxy_pass property (note the addition of the GUID to the container name):
location / {
    proxy_pass http://my_upstream_container_name_1_2478f2b3aca0:3000;
As opposed to the previously working, but now broken:
location / {
    proxy_pass http://my_upstream_container_name_1:3000;
Most likely cause is a recent change to Docker Compose, in their default naming scheme for containers, as listed here.
This seems to be happening for me and my team at work, with latest versions of the Docker nginx image:
I've opened issues with them on the docker/compose GitHub here
This error appeared for me because my php-fpm image had cron enabled, and I have no idea why.
In my case it was nginx: [emerg] host not found in upstream as well, and I managed to solve it by adding a depends_on directive to the nginx service in the docker-compose.yml file.
(new to nginx)
In my case it was a wrong folder name.
For this config:
upstream serv {
    server ex2_app_1:3000;
}
make sure the app folder is in ex2 folder:
ex2/app/...
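In other words, the compose project directory (ex2) plus the service name (app) and the instance number make up the container name. A minimal sketch of the matching layout; the build context is an assumption:

# ex2/docker-compose.yml
version: '3'
services:
  app:
    build: ./app
    expose:
      - "3000"
# compose then names the container ex2_app_1, which matches
# the "server ex2_app_1:3000;" line in the upstream block above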