I’d like to make a fully dockerized Drupal install. My first step is to get containers running with Nginx and php5-fpm, both Debian-based. I’m on the CoreOS alpha channel (on DigitalOcean).
My Dockerfiles are the following:
Nginx:
FROM debian
MAINTAINER fvhemert
RUN apt-get update && apt-get install -y nginx && echo "\ndaemon off;" >> /etc/nginx/nginx.conf
CMD ["nginx"]
EXPOSE 80
This container builds and runs nicely. I see the default Nginx page on my server IP.
Php5-fpm:
FROM debian
MAINTAINER fvhemert
RUN apt-get update && apt-get install -y \
php5-fpm \
&& sed 's/;daemonize = yes/daemonize = no/' -i /etc/php5/fpm/php-fpm.conf
CMD ["php5-fpm"]
EXPOSE 9000
This container also builds with no problems and it keeps running when started.
I start the php5-fpm container first with:
docker run -d --name php5-fpm freek/php5-fpm:1
And then I start Nginx, linked to php5-fpm:
docker run -d -p 80:80 --link php5-fpm:phpserver --name nginx freek/nginx-php:1
The linking seems to work: there is an entry in /etc/hosts with the name phpserver. Both containers are running:
core@dockertest ~ $ docker ps
CONTAINER ID        IMAGE                COMMAND       CREATED          STATUS          PORTS                  NAMES
fd1a9ae0f1dd        freek/nginx-php:4    "nginx"       38 minutes ago   Up 38 minutes   0.0.0.0:80->80/tcp     nginx
3bd12b3761b9        freek/php5-fpm:2     "php5-fpm"    38 minutes ago   Up 38 minutes   9000/tcp               php5-fpm
I have adjusted some of the config files. For the Nginx container I edited /etc/nginx/sites-enabled/default and changed:
server {
#listen 80; ## listen for ipv4; this line is default and implied
#listen [::]:80 default_server ipv6only=on; ## listen for ipv6
root /usr/share/nginx/www;
index index.html index.htm index.php;
(I added the index.php)
And further on:
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
#
# # With php5-cgi alone:
fastcgi_pass phpserver:9000;
# # With php5-fpm:
# fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
}
In the php5-fpm docker I changed /etc/php5/fpm/php.ini:
cgi.fix_pathinfo=0
php5-fpm runs:
[21-Nov-2014 06:15:29] NOTICE: fpm is running, pid 1
[21-Nov-2014 06:15:29] NOTICE: ready to handle connections
I also changed index.html to index.php; it looks like this (/usr/share/nginx/www/index.php):
<html>
<head>
<title>Welcome to nginx!</title>
</head>
<body bgcolor="white" text="black">
<center><h1>Welcome to nginx!</h1></center>
<?php
phpinfo();
?>
</body>
</html>
I have scanned port 9000 from the Nginx container; it appears as closed. Not a good sign, of course:
root@fd1a9ae0f1dd:/# nmap -p 9000 phpserver
Starting Nmap 6.00 ( http://nmap.org ) at 2014-11-21 06:49 UTC
Nmap scan report for phpserver (172.17.0.94)
Host is up (0.00022s latency).
PORT STATE SERVICE
9000/tcp closed cslistener
MAC Address: 02:42:AC:11:00:5E (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 0.13 seconds
The Nginx logs:
root@fd1a9ae0f1dd:/# vim /var/log/nginx/error.log
2014/11/20 14:43:46 [error] 13#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 194.171.252.110, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "128.199.60.95"
2014/11/21 06:15:51 [error] 9#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 145.15.244.119, server: localhost, request: "GET / HTTP/1.0", upstream: "fastcgi://172.17.0.94:9000", host: "128.199.60.95"
Yes, that goes wrong, and I keep getting a 502 Bad Gateway error when browsing to my Nginx instance.
My question is: What exactly goes wrong? My guess is that I’m missing some setting in the php config files.
EDIT FOR MORE DETAILS:
This is the result (from inside the php5-fpm container, after apt-get install net-tools):
root@3bd12b3761b9:/# netstat -tapen
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name
From inside the Nginx container:
root@fd1a9ae0f1dd:/# netstat -tapen
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      0          1875387    -
EDIT2:
Progression!
In the php5-fpm container, in the file:
/etc/php5/fpm/pool.d/www.conf
I changed the listen directive from a Unix socket path to:
listen = 9000
Now when I go to my webpage I get the error:
"No input file specified."
Probably I have a trailing slash wrong somewhere. I'll look into it more closely!
EDIT3:
So I have rebuilt the containers with the above-mentioned alterations, and it seems that they are talking. However, my webpage tells me: "File not found."
I'm very sure it has to do with the document path that nginx sends to php-fpm, but I have no idea what it should look like. I used the defaults with the socket method, which always worked; now it doesn't work anymore. What should be in /etc/nginx/sites-enabled/default under location ~ \.php$ { ?
The reason it doesn't work is, as you have discovered yourself, that nginx only sends the path of the PHP file to PHP-FPM, not the file itself (which would be quite inefficient). The solution is to use a third, data-only VOLUME container to host the files, and then mount it in both Docker instances.
FROM debian
VOLUME /var/www
CMD ["true"]
Build the above Dockerfile and create an instance (call it, for example, storage-www), then run both the nginx and the PHP-FPM containers with the option:
--volumes-from storage-www
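Concretely, a minimal sketch (the storage image tag is an assumption; the other names come from the question, and the storage container just runs true and exits while its volume persists):
docker build -t storage-www-img .
docker run --name storage-www storage-www-img
docker run -d --volumes-from storage-www --name php5-fpm freek/php5-fpm:1
docker run -d -p 80:80 --volumes-from storage-www --link php5-fpm:phpserver --name nginx freek/nginx-php:1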
That will work if you run both containers on the same physical server.
But you could still use different servers if you put that data-only container on a networked file system, such as GlusterFS, which is quite efficient and can be distributed over a large-scale network.
Hope that helps.
Update:
As of 2015, the best way to make persistent links between containers is to use docker-compose.
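For example, a minimal docker-compose sketch in the v1 format of that era (service and image names follow the question; this is an assumption, not a tested setup):
nginx:
  image: freek/nginx-php:1
  ports:
    - "80:80"
  links:
    - "php5-fpm:phpserver"
  volumes_from:
    - storage-www
php5-fpm:
  image: freek/php5-fpm:1
  volumes_from:
    - storage-www
storage-www:
  image: debian
  volumes:
    - /var/www
  command: "true"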
So, I have tested all the settings, and none worked between containers, while they did work with the same settings on one server (and probably also within one container). Then I found out that php-fpm does not receive PHP files from nginx; it only receives the path, and if it can't find the same file in its own container, it generates "File not found". See here for more information: https://code.google.com/p/sna/wiki/NginxWithPHPFPM So that solves the question, but not the problem, sadly. This is quite annoying for people who want to do load balancing with multiple php-fpm servers; they'd have to rsync everything, or something like that. I hope someday I'll find a better solution. Thanks for the replies.
EDIT: Perhaps I can mount the same volume in both containers and get it to work that way. That won't be a solution when using multiple servers, though.
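A minimal sketch of that location block under the shared-volume assumption (the explicit SCRIPT_FILENAME line is my addition; the Debian fastcgi_params of that era did not set it):
location ~ \.php$ {
    fastcgi_pass phpserver:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    # the path is resolved inside the php5-fpm container, so the document
    # root must exist there as well
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}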
When you are in your container as
root@fd1a9ae0f1dd:/#
check the ports in use with
netstat -tapen | grep ":9000 "
or
netstat -lntpu | grep ":9000 "
or the same commands without the grep filter.
Related
I have a VPS running Ubuntu + Nginx. There's an old website I'm no longer using, so I followed these steps (based on these instructions, and these to remove the SSL):
cd /etc/nginx/sites-enabled
sudo rm oldwebsite.com
cd ../sites-available
sudo rm oldwebsite.com
Next, I figured I could also delete the relevant files in /var/www/
cd /var/www/
sudo rm -r oldwebsite.com
Now when I try to access www.oldwebsite.com, I still get the same website, just without HTTPS anymore. I've checked /etc/nginx/sites-available/default for any remaining references to that website, and as far as I can tell, I've erased all traces of its existence from my server.
Was this the incorrect way to delete an old website?
If it helps, my old website was set up to use a reverse proxy to direct to my Express app. It was set up as a server block according to this guide.
So first of all, if you don't want to make it accessible anymore, delete the host A record in your DNS. With that, the DNS query will no longer point to any server's IP address.
Based on your comment: if it's showing the Apache default page, your DNS points to the IP address of a web server running httpd. So let me draft a couple of steps for how I would do it (as somebody who has moved 10K+ sites from and to NGINX).
1. DNS is key
Check the current DNS Setting for your domain. Do a quick lookup using tools like host or dig.
$# host nginx.org
nginx.org has address 52.58.199.22
So great, now we know the public IPv4 of our web server (we are not talking about load balancers or anything else in between; we assume the web server is directly connected to the internet).
2. WebServer configuration
On your server, make sure nginx is installed and listening on port 80, for example.
$# netstat -tulpn | grep "LISTEN"
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 25674/nginx: master
Great. We have NGINX listening on port 80. Let's make sure we can send a request.
$# curl -v http://YOURDOMAIN
* About to connect() to localhost port 80 (#0)
* Trying ::1...
* Connection refused
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.19.5
< Date: Sun, 14 Feb 2021 08:47:56 GMT
< Content-Type: text/plain
< Content-Length: 10
< Connection: keep-alive
<
localhost
* Connection #0 to host localhost left intact
So if you got a response, that means your NGINX is up and running, listening on port 80, and there is no firewall (ufw, firewalld, iptables, security groups...) blocking you from reaching the server.
NOTICE: Make sure your firewall setup is done right. Let me know if you need more information on that, depending on your system's architecture.
3. NGINX configuration
Let's say your website should just print out a string saying "We will be here shortly!"
Based on your OS, the configuration directory for custom nginx files can differ. Check the default /etc/nginx/nginx.conf file and look at the include path in the http context. This should be something like include conf.d/*.conf; or include sites-enabled/*.conf;. Create a conf file in one of those directories.
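A quick way to check (a trivial sketch):
grep -n "include" /etc/nginx/nginx.conf
The conf file itself could then look like this: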
server {
    listen 80;
    server_name YOURDOMAIN.com;

    location / {
        # default_type sets the Content-Type header for the returned body
        default_type text/html;
        return 200 "We will be here shortly!\n";
    }
}
With this simple setup you can have a web server up and running, though not showing anything special. If you want to serve a little HTML file, feel free to do so, and use root and index to configure it in your nginx configuration, as sketched below.
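A minimal sketch (the root path is an assumption; point it to wherever your files live):
server {
    listen 80;
    server_name YOURDOMAIN.com;

    root /var/www/YOURDOMAIN;   # hypothetical document root
    index index.html;
}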
After deleting the site configuration, run sudo systemctl restart nginx; nginx keeps serving the configuration it loaded at startup until it is restarted or reloaded.
Dear K8S community Team,
I am getting this error message from nginx when I deploy my application pod. My application, an Angular 6 app, is hosted inside an nginx server, which is deployed as a Docker container inside EKS.
I have my application configured with a "read-only container filesystem", but I am using an "ephemeral mounted" volume of type "emptyDir" in combination with the read-only filesystem.
So I am not sure of the reason for the following error:
2019/04/02 14:11:29 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (30: Read-only file system)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (30: Read-only file system)
My deployment.yaml is:
...
spec:
  volumes:
    - name: tmp-volume
      emptyDir: {}
  # Pod Security Context
  securityContext:
    fsGroup: 2000
  containers:
    - name: {{ .Chart.Name }}
      volumeMounts:
        - mountPath: /tmp
          name: tmp-volume
      image: "{{ .Values.image.name }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      # Container Security Context
      securityContext:
        capabilities:
          add:
            - NET_BIND_SERVICE
          drop:
            - ALL
        readOnlyRootFilesystem: true
      ports:
        - name: http
          containerPort: 80
          protocol: TCP
...
nginx.conf is:
...
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Turn off the bloody buffering to temp files
    proxy_buffering off;
    sendfile off;

    keepalive_timeout 120;
    server_names_hash_bucket_size 128;

    # These two should be the same or nginx will start writing
    # large request bodies to temp files
    client_body_buffer_size 10m;
    client_max_body_size 10m;
...
It seems like your nginx is not running as the root user.
Since release 1.12.1-r2, the nginx daemon runs as user 1001.
1.12.1-r2
The nginx container has been migrated to a non-root container approach. Previously the container ran as the root user and the nginx daemon was started as the nginx user. From now on, both the container and the nginx daemon run as user 1001. As a consequence, the configuration files are writable by the user running the nginx process.
This is why you are unable to bind to port 80: as a non-root user it is necessary to use an unprivileged port (1024 or above).
You should use:
ports:
- '80:8080'
- '443:8443'
and edit the nginx.conf so it listens on port 8080:
server {
listen 0.0.0.0:8080;
...
Or run nginx as root:
command: [ "/bin/bash", "-c", "sudo nginx -g 'daemon off;'" ]
As already stated by Crou, the nginx image maintainers switched to a non-root-user approach.
This has two implications:
Your nginx process might not be able to bind all network sockets.
Your nginx process might not be able to read all file system locations.
You can try to change the ports as described by Crou (in nginx.conf and deployment.yaml). Even with the NET_BIND_SERVICE capability added to the container, this does not necessarily mean that the nginx process gets this capability. You can try to add the capability with
$ sudo setcap 'cap_net_bind_service=+ep' $(which nginx)
as a RUN instruction in your Dockerfile.
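A sketch of that instruction (build steps already run as root, so sudo is unnecessary; on Debian-based images setcap is provided by libcap2-bin, and the nginx binary path may differ in your image):
RUN apt-get update && apt-get install -y libcap2-bin \
 && setcap 'cap_net_bind_service=+ep' /usr/sbin/nginx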
However it is usually simpler to just change the listening port.
For the filesystem, please note that /var/cache/nginx/ is not mounted as a volume and thus belongs to the root filesystem, which is mounted read-only. The simplest way to solve this is to add a second ephemeral emptyDir for /var/cache/nginx/ in the volumes section, as sketched below. Please make sure that the nginx user has the filesystem permissions to read and write this directory. This is usually already taken care of by the Docker image maintainers, as long as you stay with the default locations.
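A sketch of the relevant parts of the deployment (the cache volume name and container name are assumptions):
spec:
  volumes:
    - name: tmp-volume
      emptyDir: {}
    - name: nginx-cache
      emptyDir: {}
  containers:
    - name: app
      volumeMounts:
        - mountPath: /tmp
          name: tmp-volume
        - mountPath: /var/cache/nginx
          name: nginx-cache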
I recommend that you not switch back to running nginx as root, as this might expose you to security vulnerabilities.
I've set up uWSGI and NGINX locally and my configurations have no issue serving web requests. However, when I containerize uWSGI and NGINX (in separate Docker containers) I can't seem to make a connection. My NGINX server configuration looks like this:
server {
listen 80;
server_name localhost;
location / {
include uwsgi_params;
uwsgi_pass uwsgi.app:9090;
}
}
My uWSGI ini file looks like this:
[uwsgi]
socket = localhost:9090
module = uwsgi:app
processes = 4
threads = 2
master = true
buffer-size = 32768
stats = localhost:9191
die-on-term = true
vacuum = true
I run both containers in the same user-defined bridge network, using the following for uWSGI:
docker run -p 9090:9090 --network main --name uwsgi.app -d uwsgirepo
and the following for NGINX:
docker run -p 80:80 --network main --link uwsgi.app --name nginx.app -d nginxrepo
When I try to make a request on my local machine, I get the following error message in the NGINX logs:
[error] 7#7: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: localhost, request: "GET /endpoint HTTP/1.1", upstream: "uwsgi://172.18.0.2:9090", host: "127.0.0.1"
It doesn't look like it's ever connecting to uWSGI. Not sure where to go from here. Any thoughts?
Configure uWSGI to listen on 0.0.0.0 instead of localhost. That will make it listen on all of the container's interfaces. You are getting a connection refused because it is only listening on the uWSGI container's localhost, not on the container's eth0.
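In the ini file from the question, that means changing the socket line:
[uwsgi]
; bind on all interfaces so the nginx container can reach it over the bridge network
socket = 0.0.0.0:9090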
I'm having trouble with consistent service discovery using EC2, AWS, Docker, Consul-Template, Consul, and NGINX.
I have multiple services, each running on its own EC2 instance. On these instances I run the following containers (in this order):
cAdvisor (monitoring)
node-exporter (monitoring)
Consul (running in agent mode)
Registrator
My service
Custom container running both nginx and consul-template
The custom container has the following Dockerfile:
FROM nginx:1.9
#Install Curl
RUN apt-get update -qq && apt-get -y install curl
#Install Consul Template
RUN curl -L https://github.com/hashicorp/consul-template/releases/download/v0.10.0/consul-template_0.10.0_linux_amd64.tar.gz | tar -C /usr/local/bin --strip-components 1 -zxf -
#Setup Consul Template Files
RUN mkdir /etc/consul-templates
COPY ./app.conf.tmpl /etc/consul-templates/app.conf
# Remove all other conf files from nginx
RUN rm /etc/nginx/conf.d/*
#Default Variables
ENV CONSUL consul:8500
CMD /usr/sbin/nginx -c /etc/nginx/nginx.conf && consul-template -consul=$CONSUL -template "/etc/consul-templates/app.conf:/etc/nginx/conf.d/app.conf:/usr/sbin/nginx -s reload"
The app.conf file looks like this:
{{range services}}
upstream {{.Name}} {
least_conn;{{range service .Name}}
server {{.Address}}:{{.Port}};{{end}}
}
{{end}}
server {
listen 80 default_server;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
location / {
proxy_pass http://cart/cart/;
}
location /cart {
proxy_pass http://cart/cart;
}
{{range services}}
location /api/{{.Name}} {
proxy_read_timeout 180;
proxy_pass http://{{.Name}}/{{.Name}};
}
{{end}}
}
Everything seems to start up perfectly OK, but at some point (which I've yet to identify) after startup, consul-template seems to decide that there are no available servers for a particular service. This means that the upstream section for that service contains no servers, and I end up with this in the logs:
2015/12/04 07:09:34 [emerg] 77#77: no servers are inside upstream in /etc/nginx/conf.d/app.conf:336
nginx: [emerg] no servers are inside upstream in /etc/nginx/conf.d/app.conf:336
2015/12/04 07:09:34 [ERR] (runner) error running command: exit status 1
Consul Template returned errors:
1 error(s) occurred:
* exit status 1
2015/12/04 07:09:34 [DEBUG] (logging) setting up logging
2015/12/04 07:09:34 [DEBUG] (logging) config:
{
"name": "consul-template",
"level": "WARN",
"syslog": false,
"syslog_facility": "LOCAL0"
}
2015/12/04 07:09:34 [emerg] 7#7: no servers are inside upstream in /etc/nginx/conf.d/app.conf:336
nginx: [emerg] no servers are inside upstream in /etc/nginx/conf.d/app.conf:336
After this, NGINX will no longer accept requests.
I'm sure I'm missing something obvious, but I've tied myself in mental knots about the sequence of events. What I think might be happening is that NGINX crashes, but because consul-template is still running, the Docker container doesn't restart. I don't actually care whether the container itself restarts or just NGINX restarts.
Can someone help?
Consul Template will exit once the command it runs after writing the template returns a non-zero exit code. See here for the documentation.
The documentation suggests putting a || true just after the restart (or reload) command. This will keep Consul Template running regardless of the exit code.
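Applied to the CMD from the Dockerfile in the question, that would look like:
CMD /usr/sbin/nginx -c /etc/nginx/nginx.conf && consul-template -consul=$CONSUL -template "/etc/consul-templates/app.conf:/etc/nginx/conf.d/app.conf:/usr/sbin/nginx -s reload || true"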
You could also consider wrapping the restart in its own shell script that first tests the configuration (with nginx -t) before triggering a reload. You could even move the initial start of nginx into this script, since it only makes sense to start nginx once the first valid configuration has been written.
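A rough sketch of such a wrapper (the script name and pid-file path are assumptions):
#!/bin/sh
# reload-nginx.sh: start nginx on the first valid config, reload on later ones
if nginx -t; then
    if [ -f /var/run/nginx.pid ]; then
        nginx -s reload
    else
        nginx
    fi
fi
# always exit 0 so consul-template keeps running even if validation failed
true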
I want to deploy my Flask service on a server running CentOS 7, so I followed this tutorial: https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uwsgi-and-nginx-on-centos-7.
After running the systemctl start nginx command, I got this error:
nginx: [emerg] bind() to 0.0.0.0:5000 failed (13: Permission denied)
My nginx.conf file:
server {
listen 5000;
server_name _;
location / {
include uwsgi_params;
uwsgi_pass unix:/root/fiproxy/fiproxyproject/fiproxy.sock;
}
}
Note: the Flask service and uWSGI work OK. And I've tried to run nginx as the superuser; the error remains.
After searching a lot on the Internet, I found a solution to my problem.
I ran this command to list all port labels on my machine: semanage port -l (semanage is the SELinux policy management tool, which is where the permission error comes from).
After that, I filtered the output with: semanage port -l | grep 5000.
I realized that port 5000 is labeled commplex_main_port_t; I searched on speedguide and found: 5000 tcp,udp **UPnP**.
Conclusion: my problem was probably binding a standard port that SELinux had already labeled for another service.
To add your desired port use this command:
sudo semanage port -a -t http_port_t -p tcp [yourport]
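Note that since port 5000 is already labeled (commplex_main_port_t), adding it with -a may fail with "already defined"; in that case, modify the existing label instead (shown here with port 5000 as the example):
sudo semanage port -m -t http_port_t -p tcp 5000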
Now run nginx with sudo:
sudo systemctl stop nginx
sudo systemctl start nginx
The Nginx master process needs root permission because it needs to bind to a privileged port.
You need to start Nginx as the root user.
You can then define the user for the worker processes in nginx.conf.
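For example, in nginx.conf (www-data is the Debian convention; the user name on your system may differ):
# run worker processes as an unprivileged user while the master keeps root
user www-data;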