I want it so that when I type 10.10.0.0 in the browser, it takes me to the index page. But it doesn't.
I tried
server {
    listen 8080;
    server_name 10.10.0.0;
    return 301 http://localhost:8080/index.html;
}
I didn't have time to test this, but try the following:
server {
    listen 8080;
    server_name 10.10.0.0;

    location / {
        return 301 http://localhost:8080/index.html;
    }
}
Let's dissect it:
Does this work without Docker?
If yes, let's look at which ports you are exposing to the outside world from your container.
For example, in docker-compose you need to publish them like below.
NOTE: see "ports", NOT "expose". It says: "to the external world I am exposing 8080, and from there I will route internally to port 80 in the container."
nginx:
  build:
    context: ./nginx
    dockerfile: Dockerfile
  command: /usr/sbin/nginx -g 'daemon off;' -c /etc/nginx/nginx.conf
  container_name: my_nginx_server
  tty: true
  expose:
    - "80" # This is internal to the container network
  ports:
    - "8080:80" # HOST:CONTAINER
If you are using the command line, the run command should include "-p 8080:80".
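For instance, a minimal sketch (the image name my_nginx_image is a placeholder):

# Publish host port 8080 and route it to port 80 inside the container.
docker run -d --name my_nginx_server -p 8080:80 my_nginx_image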
If it does not work without Docker, check your nginx <--> uwsgi (or whatever) <--> your_app settings.
Please share more info: your Dockerfile and docker-compose.yml.
I have nginx in a docker container, and a nodejs webapp in another docker container.
The nodejs server is reachable from the host server on port 8080.
The nginx docker container is listening on port 80 (I will do the certificate later; first this base must be working).
And now I want a subdomain to be forwarded to this port-8080 nodejs app, let's say app1.example.com.
From outside I can reach the app by the server IP (or hostname) and port 8080, but not on app1.example.com. It does work on app1.example.com:8080 (I have opened up port 8080 on the host server).
I get a bad gateway nginx message when approaching app1.example.com. So I do get into the first nginx container, but how do I get back to the host server to proxy pass to port 8080 of the host server (and not port 8080 of the nginx container)? I'm looking for the reverse of the EXPOSE syntax.
The main problem is, of course, that if I use the IP and port 127.0.0.1:8080 it will try it on the nginx container...
So how do I let the nginx container route back to the host's 127.0.0.1:8080?
I have tried 0.0.0.0 and defining an upstream; I've been googling a lot and have tried a lot of configurations, but I have not yet found a working one...
Edit
Just found out that this docker command might help:
sudo docker network inspect bridge
This shows the IP address used inside the containers (in my case 172.17.0.2), but I'm not sure this address stays the same every time docker restarts (e.g. after a server reboot).
Edit
Following alkaline's answer, I now have this (but it's still not working):
my docker-compose.yml file:
version: "2"
services:
  nginx:
    container_name: nginx
    image: nginx_img
    build: ../docker-nginx-1/
    ports:
      - "80:80"
    networks:
      - backbone
  nodejs:
    container_name: nodejs
    image: merites/docker-simple-node-server
    build: ../docker-simple-node-server/
    networks:
      - backbone
    expose:
      - 8080
networks:
  backbone:
    driver: bridge
and my nginx config (I skipped the include in the conf.d folder for simplicity):
worker_processes 1;

events { worker_connections 1024; }

http {
    sendfile on;

    upstream upsrv {
        server nodejs:8080;
    }

    server {
        listen 80;
        server_name app1.example.com;

        location / {
            proxy_pass http://upsrv;
        }
    }
}
edit 31-08-2016
This might be the problem: the network name is not backbone, but is named after the folder the service was started from:
sudo docker network ls
outputs:
NETWORK ID NAME DRIVER SCOPE
1167c2b0ec31 bridge bridge local
d06ffaf26fe2 dockerservices1_backbone bridge local
5e4ec13d790a host host local
7d1f8c32f259 none null local
edit 01-09-2016
It might be caused by the way I set up my nginx docker container?
This is the Dockerfile I used:
############################################################
# Dockerfile to build Nginx Installed Containers
# Based on Ubuntu
############################################################
# Set the base image to Ubuntu
FROM ubuntu
# File Author / Maintainer
MAINTAINER Maintainer Name
# Install Nginx
# Add application repository URL to the default sources
# RUN echo "deb http://archive.ubuntu.com/ubuntu/ raring main universe" >> /etc/apt/sources.list
# Update the repository
RUN apt-get update
# Install necessary tools
RUN apt-get install -y nano wget dialog net-tools
# Download and Install Nginx
RUN apt-get install -y nginx
# Remove the default Nginx configuration file
RUN rm -v /etc/nginx/nginx.conf
# Copy a configuration file from the current directory
ADD nginx.conf /etc/nginx/
# Append "daemon off;" to the end of the configuration
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
# Expose ports
EXPOSE 80
# Set the default command to execute
# when creating a new container
CMD service nginx start
My final solution (1st Sept. 2016)
I used this compose file now:
version: "2"
services:
  nginx:
    image: nginx
    container_name: nginx
    volumes:
      - ./nginx-configs:/etc/nginx/conf.d
    ports:
      - "80:80"
    networks:
      - backbone
  nodejs:
    container_name: nodejs
    image: merites/docker-simple-node-server
    build: ../docker-simple-node-server/
    networks:
      - backbone
    expose:
      - 8080
networks:
  backbone:
    driver: bridge
In the project folder, from which you run docker-compose up -d, I added a folder named nginx-configs. This folder 'overrides' all the files in the nginx container under /etc/nginx/conf.d.
Therefore, before I added this volume mount, I copied the default.conf out of the nginx container using the command:
docker exec -t -i container_name /bin/bash
and then cat /etc/nginx/conf.d/default.conf
and added the same default.conf to the project folder with the nginx configs.
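For what it's worth, docker cp can grab that file in one step instead (assuming the running container is named nginx):

# Copy the default config out of the container into the local config folder.
docker cp nginx:/etc/nginx/conf.d/default.conf ./nginx-configs/default.conf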
Besides the default I added app1.conf with this content:
upstream upsrv1 {
    server nodejs:8080;
}

server {
    listen 80;
    server_name app1.example.com;

    location / {
        proxy_pass http://upsrv1;
    }
}
This way, I can easily add a second app, a third, and so on.
So the basics are working now.
Here's a best practice. Only expose port 80 outside of the host. The nodejs app can be in a private network only accessible through nginx.
version: "2"
services:
  nginx:
    ...
    ports:
      - "80:80"
    networks:
      - backbone
  nodejs:
    ...
    networks:
      - backbone
    expose:
      - 8080
networks:
  backbone:
    driver: bridge
In your nginx.conf file, the upstream servers can be listed as nodejs:8080. The docker daemon will resolve it to the correct internal IP.
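A minimal sketch of what that looks like on the nginx side (the upstream name upsrv is arbitrary; nodejs is the compose service name from above):

upstream upsrv {
    # Docker's DNS resolves the compose service name to the container IP.
    server nodejs:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://upsrv;
    }
}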
I am attempting to set up an nginx container that serves as a proxy to another container I have set up. I would like to automate this setup, as I need to deploy a similar setup across several servers. For this I am using Ansible.
Here is my nginx.conf:
events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;

        location / {
            proxy_pass http://192.168.1.14:9000;
        }
    }
}
Here is the relevant part of my Ansible YAML file:
- name: Install Nginx
  docker:
    name: nginx
    image: nginx
    detach: True
    ports:
      - 8080:8080
    volumes:
      - /etc/docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
When I first run my playbook, nginx is running but is not bound to 8080, as seen here:
6a4f610e86d nginx "nginx -g 'daemon off" 35 minutes ago Up Less than a second 80/tcp, 443/tcp nginx
However, if I run the nginx container directly with:
docker run -d -v /etc/docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro -p 8080:8080 nginx
nginx and my proxy run as expected and listen on 8080:
c3a46421045c nginx "nginx -g 'daemon off" 2 seconds ago Up 1 seconds 80/tcp, 443/tcp, 0.0.0.0:8080->8080/tcp determined_swanson
Any idea why it works one way but not the other?
Update
Per the guidance given in the selected answer, I updated my YAML file as follows:
- name: Install Nginx
  docker:
    name: nginx
    image: nginx
    detach: True
    ports:
      - 8080:8080
    expose:
      - 8080
    volumes:
      - /etc/docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
First, you need to make sure your nginx image EXPOSEs port 8080, and you can specify it directly in your Ansible YAML file:
expose
(added in 1.5)
List of additional container ports to expose for port mappings or links. If the port is already exposed using EXPOSE in a Dockerfile, you don't need to expose it again.
Then, the only other difference I see when considering the Ansible docker module is that the ports are inside double quotes:
ports:
  - "8080:9000"
Also, if you want to proxy_pass to another container in the same docker daemon, you might want to use a link instead of a fixed IP address:
links:
  - "myredis:aliasedredis"
That way, your nginx.conf includes a fixed rule:
proxy_pass http://aliasedredis:9000;
I have recently started migrating to Docker 1.9 and Docker Compose 1.5's networking features to replace links.
So far, with links there were no problems with nginx connecting to my php5-fpm fastcgi server, located in a different container in one group, via docker-compose. Newly though, when I run docker-compose --x-networking up, my php-fpm, mongo and nginx containers boot up, but nginx quits straight away with: [emerg] 1#1: host not found in upstream "waapi_php_1" in /etc/nginx/conf.d/default.conf:16
However, if I run the docker-compose command again while the php and mongo containers are running (nginx exited), nginx starts and works fine from then on.
This is my docker-compose.yml file:
nginx:
  image: nginx
  ports:
    - "42080:80"
  volumes:
    - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro

php:
  build: config/docker/php
  ports:
    - "42022:22"
  volumes:
    - .:/var/www/html
  env_file: config/docker/php/.env.development

mongo:
  image: mongo
  ports:
    - "42017:27017"
  volumes:
    - /var/mongodata/wa-api:/data/db
  command: --smallfiles
This is my default.conf for nginx:
server {
    listen 80;

    root /var/www/test;

    error_log /dev/stdout debug;
    access_log /dev/stdout;

    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/.+\.php(/|$) {
        # Referencing the php service host (Docker)
        fastcgi_pass waapi_php_1:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        # We must reference the document_root of the external server ourselves here.
        fastcgi_param SCRIPT_FILENAME /var/www/html/public$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }
}
How can I get nginx to work with only a single docker-compose call?
This can be solved with the mentioned depends_on directive, since it is implemented now (2016):
version: '2'
services:
  nginx:
    image: nginx
    ports:
      - "42080:80"
    volumes:
      - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - php
  php:
    build: config/docker/php
    ports:
      - "42022:22"
    volumes:
      - .:/var/www/html
    env_file: config/docker/php/.env.development
    depends_on:
      - mongo
  mongo:
    image: mongo
    ports:
      - "42017:27017"
    volumes:
      - /var/mongodata/wa-api:/data/db
    command: --smallfiles
Successfully tested with:
$ docker-compose version
docker-compose version 1.8.0, build f3628c7
Find more details in the documentation.
There is also a very interesting article dedicated to this topic: Controlling startup order in Compose
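Building on that article, if you are on Compose file format 2.1 or newer, depends_on can also wait for a container to become healthy; a minimal sketch (the php-fpm -t probe command is an assumption about the php image):

version: '2.1'
services:
  nginx:
    image: nginx
    depends_on:
      php:
        condition: service_healthy
  php:
    build: config/docker/php
    healthcheck:
      # Probe that php-fpm can at least parse its configuration.
      test: ["CMD-SHELL", "php-fpm -t || exit 1"]
      interval: 5s
      retries: 5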
There is a possibility to use volumes_from as a workaround, until the depends_on feature (discussed below) is introduced. All you have to do is change your docker-compose file as below:
nginx:
  image: nginx
  ports:
    - "42080:80"
  volumes:
    - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  volumes_from:
    - php

php:
  build: config/docker/php
  ports:
    - "42022:22"
  volumes:
    - .:/var/www/html
  env_file: config/docker/php/.env.development

mongo:
  image: mongo
  ports:
    - "42017:27017"
  volumes:
    - /var/mongodata/wa-api:/data/db
  command: --smallfiles
One big caveat of the above approach is that the volumes of php are exposed to nginx, which is not desired. But at the moment this is a Docker-specific workaround that could be used.
depends_on feature
This would probably be a forward-looking answer, because the functionality is not yet implemented in Docker (as of 1.9).
There is a proposal to introduce "depends_on" in the new networking feature introduced by Docker, but there is a long-running debate about it at https://github.com/docker/compose/issues/374. Hence, once it is implemented, the feature depends_on could be used to order container start-up, but at the moment you would have to resort to one of the following:
make nginx retry until the php server is up - I would prefer this one (see the sketch after this list)
use the volumes_from workaround as described above - I would avoid this, because of the volume leakage into unnecessary containers.
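A minimal entrypoint sketch of the retry option, assuming nc (netcat) is available in the nginx image; the host name waapi_php_1 and port 9000 come from the config above:

#!/bin/sh
# Block until the php container accepts TCP connections on port 9000,
# then hand control to nginx in the foreground.
until nc -z waapi_php_1 9000; do
    echo "waiting for php-fpm..."
    sleep 1
done
exec nginx -g 'daemon off;'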
If you are still lost after reading the last comment, I have reached another solution.
The main problem is the way that you name the services.
In this case, if in your docker-compose.yml the service for php is called "api" or something like that, you must ensure that in the nginx.conf file the line that begins with fastcgi_pass has the same name as the php service, i.e. fastcgi_pass api:9000;
Let's say the php service name is php_service; then the code will be:
In the file docker-compose.yml
php_service:
  build:
    dockerfile: ./docker/php/Dockerfile
In the file nginx.conf
location ~ \.php$ {
    fastcgi_pass php_service:9000;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
You can set the max_fails and fail_timeout directives of nginx to indicate that nginx should retry the connection to the container a given number of times before failing on upstream server unavailability.
You can tune these two numbers per your infrastructure and the speed at which the whole setup comes up. You can read more details in the health checks section of the following URL:
http://nginx.org/en/docs/http/load_balancing.html
The following is an excerpt from http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server:
max_fails=number

sets the number of unsuccessful attempts to communicate with the server that should happen in the duration set by the fail_timeout parameter to consider the server unavailable for a duration also set by the fail_timeout parameter. By default, the number of unsuccessful attempts is set to 1. The zero value disables the accounting of attempts. What is considered an unsuccessful attempt is defined by the proxy_next_upstream, fastcgi_next_upstream, uwsgi_next_upstream, scgi_next_upstream, and memcached_next_upstream directives.

fail_timeout=time

sets the time during which the specified number of unsuccessful attempts to communicate with the server should happen to consider the server unavailable; and the period of time the server will be considered unavailable. By default, the parameter is set to 10 seconds.
To be precise, your modified nginx config file should be as follows (this script assumes that all the containers are up within at least 25 seconds; if not, change fail_timeout or max_fails in the upstream section below).
Note: I didn't test the script myself, so you could give it a try!
upstream phpupstream {
    server waapi_php_1:9000 fail_timeout=5s max_fails=5;
}

server {
    listen 80;

    root /var/www/test;

    error_log /dev/stdout debug;
    access_log /dev/stdout;

    location / {
        # try to serve file directly, fallback to app.php
        try_files $uri /index.php$is_args$args;
    }

    location ~ ^/.+\.php(/|$) {
        # Referencing the php service host (Docker)
        fastcgi_pass phpupstream;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        include fastcgi_params;
        # We must reference the document_root of the external server ourselves here.
        fastcgi_param SCRIPT_FILENAME /var/www/html/public$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }
}
Also, per the following note from docker (https://github.com/docker/docker.github.io/blob/master/compose/networking.md#update-containers), it is evident that the retry logic for checking the health of the other containers is not docker's responsibility; rather, the containers should do the health check themselves.
Updating containers

If you make a configuration change to a service and run docker-compose up to update it, the old container will be removed and the new one will join the network under a different IP address but the same name. Running containers will be able to look up that name and connect to the new address, but the old address will stop working.

If any containers have connections open to the old container, they will be closed. It is a container's responsibility to detect this condition, look up the name again and reconnect.
My problem was that I forgot to specify the network alias for php-fpm in docker-compose.yml:
networks:
  - u-online
Now it works well!
version: "3"
services:
  php-fpm:
    image: php:7.2-fpm
    container_name: php-fpm
    volumes:
      - ./src:/var/www/basic/public_html
    ports:
      - 9000:9000
    networks:
      - u-online
  nginx:
    image: nginx:1.19.2
    container_name: nginx
    depends_on:
      - php-fpm
    ports:
      - "80:8080"
      - "443:443"
    volumes:
      - ./docker/data/etc/nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf
      - ./docker/data/etc/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./src:/var/www/basic/public_html
    networks:
      - u-online

# Docker Networks
networks:
  u-online:
    driver: bridge
I believe nginx doesn't take the Docker resolver (127.0.0.11) into account, so please can you try adding:
resolver 127.0.0.11;
to your nginx configuration file?
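A sketch of where that could go (the service name php and port 9000 are assumptions; moving the address into a variable makes nginx resolve it at request time instead of at startup):

server {
    listen 80;
    # 127.0.0.11 is Docker's embedded DNS server; re-resolve every 30s.
    resolver 127.0.0.11 valid=30s;

    location / {
        # A variable defers DNS resolution, so nginx starts even if the
        # upstream host is not up yet.
        set $backend http://php:9000;
        proxy_pass $backend;
    }
}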
I had the same problem because there were two networks defined in my docker-compose.yml: one backend and one frontend.
When I changed that to run the containers on the same default network, everything started working fine.
I found a solution for services that may be disabled for local development: use a variable. This prevents the emergency shutdown at startup, and it starts working once the service becomes available.
server {
    location ^~ /api/ {
        # other config entries omitted for brevity

        # note: fastcgi_pass expects host:port, with no http:// scheme
        set $upstream api.awesome.com:9000;

        # nginx will now start even if the host is not reachable
        fastcgi_pass $upstream;
        fastcgi_index index.php;
    }
}
source: https://sandro-keil.de/blog/let-nginx-start-if-upstream-host-is-unavailable-or-down/
I had the same problem and solved it. Please add the following lines to the nginx section of docker-compose.yml:
links:
  - php:waapi_php_1
The host used in the fastcgi_pass section of the nginx config must be linked inside the nginx configuration in docker-compose.yml.
At first glance I missed that my "web" service didn't actually start, which is why nginx couldn't find any host:
web_1 | python3: can't open file '/var/www/app/app/app.py': [Errno 2] No such file or directory
web_1 exited with code 2
nginx_1 | [emerg] 1#1: host not found in upstream "web:4044" in /etc/nginx/conf.d/nginx.conf:2
Two things worth mentioning:
using the same network bridge
using links to add host resolution
My example:
version: '3'
services:
  mysql:
    image: mysql:5.7
    restart: always
    container_name: mysql
    volumes:
      - ./mysql-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: tima#123
    network_mode: bridge
  ghost:
    image: ghost:2
    restart: always
    container_name: ghost
    depends_on:
      - mysql
    links:
      - mysql
    environment:
      database__client: mysql
      database__connection__host: mysql
      database__connection__user: root
      database__connection__password: xxxxxxxxx
      database__connection__database: ghost
      url: https://www.itsfun.tk
    volumes:
      - ./ghost-data:/var/lib/ghost/content
    network_mode: bridge
  nginx:
    image: nginx
    restart: always
    container_name: nginx
    depends_on:
      - ghost
    links:
      - ghost
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/conf.d:/etc/nginx/conf.d
      - ./nginx/letsencrypt:/etc/letsencrypt
    network_mode: bridge
If you don't specify a special network bridge, all of them will use the same default one.
Add a links section to your nginx container configuration.
You have to make the php container visible to the nginx container:
nginx:
  image: nginx
  ports:
    - "42080:80"
  volumes:
    - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  links:
    - php:waapi_php_1
With links, an order of container startup is enforced. Without links, the containers can start in any order (or really all at once).
I think the old setup could have hit the same issue if the waapi_php_1 container was slow to start up.
To get it working, you could create an nginx entrypoint script that polls and waits for the php container to be started and ready.
I'm not sure whether nginx has any way to retry the connection to the upstream automatically, but if it does, that would be a better option.
You have to use something like docker-gen to dynamically update the nginx configuration when your backend comes up.
See:
https://hub.docker.com/r/jwilder/docker-gen/
https://github.com/jwilder/nginx-proxy
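Roughly, per the nginx-proxy README, usage looks like this (my_app_image is a placeholder):

# Run the proxy with read-only access to the Docker socket.
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

# Any container started with VIRTUAL_HOST is picked up and proxied automatically.
docker run -d -e VIRTUAL_HOST=app1.example.com my_app_image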
I believe Nginx+ (the premium version) contains a resolve parameter too (http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream).
Perhaps the best choice to avoid container-linking issues is the Docker networking feature.
But to make this work, Docker creates entries in the /etc/hosts of each container, using the names assigned to each container.
With docker-compose --x-networking up those names look something like
[docker_compose_folder]-[service]-[incremental_number]
To avoid depending on unexpected changes in these names, you should use the parameter
container_name
in your docker-compose.yml as follows:
php:
  container_name: waapi_php_1
  build: config/docker/php
  ports:
    - "42022:22"
  volumes:
    - .:/var/www/html
  env_file: config/docker/php/.env.development
Make sure that this is the same name assigned in your configuration file for this service. I'm pretty sure there are better ways to do this, but it is a good approach to start with.
My workaround (after much trial and error):
In order to get around this issue, I had to get the full name of the "upstream" Docker container, found by running docker network inspect my-special-docker-network and getting the full Name property of the upstream container, as such:

"Containers": {
    "39ad8199184f34585b556d7480dd47de965bc7b38ac03fc0746992f39afac338": {
        "Name": "my_upstream_container_name_1_2478f2b3aca0",
I then used this in the NGINX my-network.local.conf file, in the proxy_pass property of the location block (note the addition of the GUID to the container name):
location / {
    proxy_pass http://my_upstream_container_name_1_2478f2b3aca0:3000;
As opposed to the previously working, but now broken:
location / {
    proxy_pass http://my_upstream_container_name_1:3000;
The most likely cause is a recent change in Docker Compose's default naming scheme for containers, as listed here.
This seems to be happening for me and my team at work with the latest versions of the Docker nginx image.
I've opened issues with them on the docker/compose GitHub here.
This error appeared for me because my php-fpm image had cron enabled, and I have no idea why.
In my case it was nginx: [emerg] host not found in upstream as well, and I managed to solve it by adding a depends_on directive to the nginx service in the docker-compose.yml file.
(new to nginx)
In my case, it was a wrong folder name.
For the config
upstream serv {
    server ex2_app_1:3000;
}
make sure the app folder is in the ex2 folder:
ex2/app/...
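As a side note, that ex2 prefix is the Compose project name, which defaults to the directory name; it can also be pinned explicitly so container names stay stable:

# Force the project name so the container is always named ex2_app_1,
# regardless of which directory the stack is started from.
docker-compose -p ex2 up -d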
I'd like to make a fully dockerized Drupal install. My first step is to get containers running with Nginx and php5-fpm, both Debian based. I'm on the CoreOS alpha channel (using DigitalOcean).
My Dockerfiles are the following:
Nginx:
FROM debian
MAINTAINER fvhemert
RUN apt-get update && apt-get install -y nginx && echo "\ndaemon off;" >> /etc/nginx/nginx.conf
CMD ["nginx"]
EXPOSE 80
This container builds and runs nicely. I see the default Nginx page on my server IP.
Php5-fpm:
FROM debian
MAINTAINER fvhemert
RUN apt-get update && apt-get install -y \
        php5-fpm \
    && sed 's/;daemonize = yes/daemonize = no/' -i /etc/php5/fpm/php-fpm.conf
CMD ["php5-fpm"]
EXPOSE 9000
This container also builds with no problems, and it keeps running when started.
I start the php5-fpm container first with:
docker run -d --name php5-fpm freek/php5-fpm:1
And then I start Nginx, linked to php5-fpm:
docker run -d -p 80:80 --link php5-fpm:phpserver --name nginx freek/nginx-php:1
The linking seems to work; there is an entry in /etc/hosts with the name phpserver. Both dockers run:
core#dockertest ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fd1a9ae0f1dd freek/nginx-php:4 "nginx" 38 minutes ago Up 38 minutes 0.0.0.0:80->80/tcp nginx
3bd12b3761b9 freek/php5-fpm:2 "php5-fpm" 38 minutes ago Up 38 minutes 9000/tcp php5-fpm
I have adjusted some of the config files. For the Nginx container I edited /etc/nginx/sites-enabled/default and changed:
server {
    #listen 80; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default_server ipv6only=on; ## listen for ipv6

    root /usr/share/nginx/www;
    index index.html index.htm index.php;
(I added the index.php)
And further on:
location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini

    # With php5-cgi alone:
    fastcgi_pass phpserver:9000;
    # With php5-fpm:
    # fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
}
In the php5-fpm docker I changed /etc/php5/fpm/php.ini:
cgi.fix_pathinfo=0
php5-fpm runs:
[21-Nov-2014 06:15:29] NOTICE: fpm is running, pid 1
[21-Nov-2014 06:15:29] NOTICE: ready to handle connections
I also changed index.html to index.php; it looks like this (/usr/share/nginx/www/index.php):
<html>
    <head>
        <title>Welcome to nginx!</title>
    </head>
    <body bgcolor="white" text="black">
        <center><h1>Welcome to nginx!</h1></center>
        <?php
            phpinfo();
        ?>
    </body>
</html>
I have scanned port 9000 from the Nginx docker; it appears as closed. Not a good sign, of course:
root#fd1a9ae0f1dd:/# nmap -p 9000 phpserver
Starting Nmap 6.00 ( http://nmap.org ) at 2014-11-21 06:49 UTC
Nmap scan report for phpserver (172.17.0.94)
Host is up (0.00022s latency).
PORT STATE SERVICE
9000/tcp closed cslistener
MAC Address: 02:42:AC:11:00:5E (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 0.13 seconds
The Nginx logs:
root#fd1a9ae0f1dd:/# vim /var/log/nginx/error.log
2014/11/20 14:43:46 [error] 13#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 194.171.252.110, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "128.199.60.95"
2014/11/21 06:15:51 [error] 9#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 145.15.244.119, server: localhost, request: "GET / HTTP/1.0", upstream: "fastcgi://172.17.0.94:9000", host: "128.199.60.95"
Yes, that goes wrong, and I keep getting a 502 Bad Gateway error when browsing to my Nginx instance.
My question is: what exactly goes wrong? My guess is that I'm missing some setting in the php config files.
EDIT FOR MORE DETAILS:
This is the result (from inside the php5-fpm container, after apt-get install net-tools):
root#3bd12b3761b9:/# netstat -tapen
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address  Foreign Address  State  User  Inode  PID/Program name

(Only the header is printed; nothing is listening on a TCP port.)
From inside the Nginx container:
root#fd1a9ae0f1dd:/# netstat -tapen
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address  Foreign Address  State   User  Inode    PID/Program name
tcp        0      0 0.0.0.0:80     0.0.0.0:*        LISTEN  0     1875387  -
EDIT2:
Progress!
In the php5-fpm container, in the file /etc/php5/fpm/pool.d/www.conf, I changed the listen argument from a unix socket name to:
listen = 9000
Now when I go to my webpage I get the error: "No input file specified."
Probably I have a trailing / wrong somewhere. I'll look into it more closely!
EDIT3:
So I have rebuilt the dockers with the above-mentioned alterations, and it seems they are talking. However, my webpage tells me: "File not found."
I'm quite sure it has to do with the path that nginx sends to php-fpm, but I have no idea what it should look like. I used the defaults with the socket method, which always worked; now it doesn't work anymore. What should be in /etc/nginx/sites-enabled/default under location ~ \.php$ { ?
The reason it doesn't work is, as you have discovered yourself, that nginx only sends the path of the PHP file to PHP-FPM, not the file itself (which would be quite inefficient). The solution is to use a third, data-only VOLUME container to host the files, and then mount it into both docker instances.
FROM debian
VOLUME /var/www
# JSON-form CMD requires double quotes
CMD ["true"]
Build the above Dockerfile and create an instance (call it, for example, storage-www), then run both the nginx and PHP-FPM containers with the option:
--volumes-from storage-www
That will work if you run both containers on the same physical server.
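A sketch of those commands, reusing the image names from the question (the storage image tag is an assumption):

# Build the data-only image and create a (never-running) container from it.
docker build -t storage-www-img .
docker create --name storage-www storage-www-img

# Mount its /var/www volume into both service containers.
docker run -d --volumes-from storage-www --name php5-fpm freek/php5-fpm:1
docker run -d -p 80:80 --volumes-from storage-www --link php5-fpm:phpserver --name nginx freek/nginx-php:1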
But you still could use different servers, if you put that data-only container on a networked file-system, such as GlusterFS, which is quite efficient and can be distributed over a large-scale network.
Hope that helps.
Update:
As of 2015, the best way to make persistent links between containers is to use docker-compose.
So, I have tested all the settings, and none worked between dockers, while they did work with the same settings on one server (and probably also within one docker). Then I found out that php-fpm does not receive php files from nginx; it only receives the path, and if it can't find the same file in its own container it generates "File not found". See here for more information: https://code.google.com/p/sna/wiki/NginxWithPHPFPM
So that solves the question but not the problem, sadly. This is quite annoying for people who want to do load balancing with multiple php-fpm servers; they'd have to rsync everything, or something like that. I hope someday I'll find a better solution. Thanks for the replies.
EDIT: Perhaps I can mount the same volume in both containers and get it to work that way. That won't be a solution when using multiple servers, though.
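For the single-server case, a sketch of that bind-mount idea (the host path /srv/www is an assumption; it must be mounted at the same in-container path in both containers, so the SCRIPT_FILENAME that nginx sends resolves inside the php container too):

# Same host directory, mounted at the same path in both containers.
docker run -d -v /srv/www:/usr/share/nginx/www --name php5-fpm freek/php5-fpm:1
docker run -d -p 80:80 -v /srv/www:/usr/share/nginx/www --link php5-fpm:phpserver --name nginx freek/nginx-php:1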
When you are in your container as root#fd1a9ae0f1dd:/#, check the ports in use with
netstat -tapen | grep ":9000 "
or
netstat -lntpu | grep ":9000 "
or the same commands without the grep.