php-fpm error "no input file specified" with Docker - nginx

I am trying to set up a Docker container for php-fpm, but I am getting this error when visiting the web directory configured on localhost. I have been stuck here for over 5 hours.
Here is my Dockerfile:
FROM centos:latest
WORKDIR /tmp
RUN yum -y update
RUN rpm -Uvh https://mirror.webtatic.com/yum/el7/epel-release.rpm; rpm -Uvh https://mirror.webtatic.com/yum/el7/webtatic-release.rpm
#RUN yum -y groupinstall "Development Tools"
RUN systemctl stop firewalld; systemctl disable firewalld
RUN yum -y install php56w php56w-opcache php56w-cli php56w-common php56w-devel php56w-fpm php56w-gd php56w-mbstring php56w-mcrypt php56w-pdo php56w-mysqlnd php56w-pecl-xdebug php56w-pecl-memcache
RUN sed -i "s/;date.timezone =.*/date.timezone = UTC/" /etc/php.ini && \
sed -i "s/display_errors = Off/display_errors = stderr/" /etc/php.ini && \
sed -i "s/upload_max_filesize = 2M/upload_max_filesize = 30M/" /etc/php.ini && \
sed -i -e "s/;daemonize\s*=\s*yes/daemonize = no/g" /etc/php-fpm.conf && \
sed -i '/^listen = /c listen = 9000' /etc/php-fpm.d/www.conf && \
sed -i '/^listen.allowed_clients/c ;listen.allowed_clients =' /etc/php-fpm.d/www.conf
RUN mkdir -p /home/www
VOLUME ["/home/www"]
EXPOSE 9000
ENTRYPOINT ["/usr/sbin/php-fpm", "-F"]
Checked with docker ps:
aab4f8ce0fe8 jason/fpm:v1 "/usr/sbin/php-fpm - 6 minutes ago Up 6 minutes 0.0.0.0:9002->9000/tcp fpm
The data volume does exist, as shown by docker inspect:
"Volumes": {
"/home/www": "/home/www"
},
"VolumesRW": {
"/home/www": true
}
"Ports": {
"9000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "9002"
}
]
}
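For reference, a docker run invocation consistent with the ps and inspect output above would look roughly like this (an assumption; the exact command is not shown in the question):
docker run -d --name fpm \
    -p 9002:9000 \
    -v /home/www:/home/www \
    jason/fpm:v1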
The nginx site config on localhost:
listen 80;
server_name admin.local.lumen.com;
index index.php index.html index.htm ;
root /home/www/lumenback/public_admin;
error_log /home/wwwlogs/lumenback_error.log;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ .*\.php?$
{
fastcgi_pass 127.0.0.1:9002;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
#include fastcgi.conf;
}
The error in the nginx error log (relayed from php-fpm via FastCGI stderr):
[error] 5322#0: *3798 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 172.16.1.19, server: admin.local.lumen.com, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9002", host: "admin.local.lumen.com"
Many posts online say this error is caused by an incorrect fastcgi_param SCRIPT_FILENAME, but that does not seem to be the reason in my case.

You may have incorrect file permissions -- the process in the container cannot read the PHP files. Make the PHP files readable by the user running php-fpm inside the container. Note that user names in the container differ from those on the host, so specifying a numeric user ID is probably the better approach (or simply make the files world-readable).
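A quick way to check this (a sketch, assuming the container is named fpm and the document root from the nginx config above):
# is the file visible and readable at the same path inside the container?
docker exec fpm ls -l /home/www/lumenback/public_admin/index.php
# which user does the php-fpm pool run as? (apache by default on CentOS)
docker exec fpm grep -E '^(user|group)' /etc/php-fpm.d/www.conf
# if in doubt, make the tree world-readable; directories also need the execute bit (capital X)
chmod -R o+rX /home/www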

Related

nginx and uwsgi config issues

When using uwsgi with the uwsgi-file parameter, I can see the correctly rendered page at 127.0.0.1/hello.py:
plugin = python3
master = true
processes = 5
chdir = /var/www/web1/
http = 127.0.0.1:9090
wsgi-file = /var/www/web1/hello.py
stats = 127.0.0.1:9191
chmod-socket = 777
vacuum = true
enable-threads = true
die-on-term = true
but when I reverse proxy from nginx using
location ~ \.py$ {
try_files $uri /index.py =404;
proxy_pass http://127.0.0.1:9090;
include uwsgi_params;
}
and disable the uwsgi-file parameter:
plugin = python3
master = true
processes = 5
chdir = /var/www/web1/
http = 127.0.0.1:9090
#wsgi-file = /var/www/web1/hello.py
stats = 127.0.0.1:9191
chmod-socket = 777
vacuum = true
enable-threads = true
die-on-term = true
then I get these errors:
browser - "Internal Server Error"
nginx console - "GET /hello.py HTTP/1.1" 500 32
uwsgi console - "GET /hello.py => generated 21 bytes in 0 msecs (HTTP/1.0 500) 2 headers in 83 bytes (0 switches on core 0)"
Can I have some help troubleshooting this, please?
The solution requires uWSGI to run CGI scripts:
enable the CGI plugin and configure the uwsgi ini file as per Running CGI scripts on uWSGI
configure the nginx reverse proxy with the uwsgi_modifier1 9 parameter
location ~ \.py$ {
try_files $uri /index.py =404;
uwsgi_modifier1 9;
proxy_pass http://127.0.0.1:9090;
include uwsgi_params;
}
and this type of 'hello world' test:
#!/usr/bin/env python
print "Content-type: text/html\n\n"
print "<h1>Hello World</h1>"
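For reference, a minimal ini sketch for the CGI side, following the pattern in the "Running CGI scripts on uWSGI" docs, with paths assumed from the question:
[uwsgi]
plugins = cgi
; speak the uwsgi protocol so nginx's uwsgi_modifier1 9 takes effect (use uwsgi_pass 127.0.0.1:9090 in nginx)
socket = 127.0.0.1:9090
; serve scripts under the project directory as CGI
cgi = /var/www/web1
; only execute .py files; the scripts need a shebang and the executable bit
cgi-allowed-ext = .py
master = true
processes = 5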

How to configure the number of connections in nginx?

limit_conn_zone $binary_remote_addr zone=perip:10m;
server {
listen 80;
server_name wx110.cn;
root /data/www/wx120;
limit_conn perip 10;
location / {
index index.html ;
}
}
ab -n 100 -c 100 http://wx110.cn/test.html
Result:
Complete requests: 100
Failed requests: 0
Everything seems fine with your config. I guess there is an issue with the testing process. Try executing this: for i in {0..2000}; do (curl -Is http://wx110.cn/ | head -n1 &) 2>/dev/null; done.
I also recommend enabling access logs in your config file: access_log /var/log/nginx/wx110.cn.log;. Then you will be able to monitor the log file while testing: tail -f /var/log/nginx/wx110.cn.log
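For reference, the recommended access_log line goes inside the server block, for example (a sketch based on the config above):
limit_conn_zone $binary_remote_addr zone=perip:10m;
server {
    listen 80;
    server_name wx110.cn;
    root /data/www/wx120;
    limit_conn perip 10;
    # requests rejected by limit_conn are logged with status 503 (the default)
    access_log /var/log/nginx/wx110.cn.log;
    location / {
        index index.html;
    }
}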

How can I get Goaccess running under Nginx?

I need to configure Nginx to make Goaccess work.
My environment is:
Ubuntu 18.04 LTS
Nginx 1.17.1 [ self-configure , path=/root ]
Let's Encrypt
sshfs
goaccess [ --enable-utf8 --enable-geoip=legacy --with-openssl ]
Since this is a self-answered Q/A I'm not including my failed attempts, but instead posting my solution. Feel free to edit it or post another answer improving the current code.
This solution/configuration solved my 400 and 502 errors
nginx conf [ key point ]
upstream goaccess {
server localhost:7890;
keepalive 60;
}
server {
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/test.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/test.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
server_name rp.test.com;
root /var/www/goaccess;
index index.html;
location / {
try_files $uri $uri/ @goaccess;
alias /var/www/goaccess/;
index index.html;
proxy_set_header Upgrade $http_upgrade;
proxy_buffering on;
proxy_set_header Connection "Upgrade";
proxy_connect_timeout 600s;
proxy_send_timeout 600s;
proxy_read_timeout 600s;
proxy_temp_path /var/nginx/proxy_temp;
}
location @goaccess {
proxy_pass http://goaccess;
}
}
goaccess [ goaccess.sh ]
#!/bin/sh
goaccess /root/test2_nginx_log/www.test2.com_access.log -o /var/www/goaccess/index.html --real-time-html --origin=https://rp.test.com --addr=0.0.0.0 --port=7890 --ws-url=wss://rp.test.com --ssl-cert=/etc/letsencrypt/live/test.com/fullchain.pem --ssl-key=/etc/letsencrypt/live/test.com/privkey.pem --time-format=%H:%M:%S --date-format=%d/%b/%Y --log-format=COMBINED
sshfs [ run_sshfs.sh ]
#!/bin/sh
if [ $(mount | grep 'root@111.111.111.xxx:/data/log/server/nginx' | wc -l) -ne 1 ]
then
echo 'yourpassword' | sshfs root@111.111.111.xxx:/data/log/server/nginx /root/test2_nginx_log -o password_stdin,allow_other
else
exit 0
fi
Finally, this shell script checks whether goaccess is running and whether the mount is online, and restarts them as needed.
check and restart [ cr.sh ]
#!/bin/bash
if pgrep -x "goaccess" > /dev/null
then
clear ;
echo "goaccess is running!"
sleep 1
echo "now cutting down goaccess...please wait"
sleep 2
kill `pgrep goaccess`
echo "cut down done"
sleep 1
echo "now check mount test2 nginx log folder..."
cd /root/ && ./run_sshfs.sh
sleep 2
echo "mount done"
sleep 1
echo "now restart goaccess..."
cd /root/ && sudo nohup ./goaccess.sh > goaccess.log 2>&1 &
sleep 2
echo "goaccess was restarting success!"
sleep 1
echo "now all done!"
exit 1
else
clear ;
echo "goaccess is down!"
sleep 1
echo "now check mount test2 nginx log folder..."
cd /root/ && ./run_sshfs.sh
sleep 2
echo "mount done"
sleep 1
echo "now start goaccess..."
cd /root/ && sudo nohup ./goaccess.sh > goaccess.log 2>&1 &
sleep 2
echo "goaccess was starting success!"
sleep 1
echo "now all done!"
fi
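The post does not say how cr.sh is scheduled; if it should run periodically, a crontab entry along these lines could work (an assumption, not part of the original setup):
# run the check-and-restart script every 10 minutes
*/10 * * * * /root/cr.sh > /dev/null 2>&1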
That's all. Now open your URL (the one that looks like my 'rp.test.com') and you should see something similar to what is shown below (barring any other special conditions).
Running:
Notice: This is CROTEL's solution, initially posted in the question and subsequently moved by community members to a community wiki answer to conform to Stack Overflow's Q/A format.

Not able to replace NGINX with NGINX Plus as reverse proxy for microservices on Google Cloud using Kubernetes

I am following this nice tutorial on how to create a scalable API with microservices on Google Cloud using Kubernetes.
I have created 4 microservices, and to expose them I am using NGINX Plus.
Note: the purpose of NGINX Plus / NGINX here is to work as a reverse proxy.
Below is the directory structure:
-nginx
--Dockerfile
--deployment.yaml
--index.html
--nginx-repo.crt
--nginx-repo.key
--nginx.conf
--svc.yaml
Details of the files can be seen here. I am pasting the Dockerfile & nginx.conf here:
Dockerfile (Original with NGINX Plus):
FROM debian:8.3
RUN apt-get update && apt-get -y install wget
RUN mkdir -p /etc/ssl/nginx && wget -P /etc/ssl/nginx https://cs.nginx.com/static/files/CA.crt
COPY nginx-repo.key /etc/ssl/nginx/nginx-repo.key
COPY nginx-repo.crt /etc/ssl/nginx/nginx-repo.crt
RUN wget http://nginx.org/keys/nginx_signing.key && apt-key add nginx_signing.key
RUN apt-get -y install apt-transport-https libcurl3-gnutls lsb-release
RUN printf "deb https://plus-pkgs.nginx.com/debian `lsb_release -cs` nginx-plus\n" | tee /etc/apt/sources.list.d/nginx-plus.list
RUN wget -P /etc/apt/apt.conf.d https://cs.nginx.com/static/files/90nginx
RUN apt-get update && apt-get -y install nginx-plus
RUN mkdir /data
COPY index.html /data/index.html
COPY nginx.conf /etc/nginx/conf.d/backend.conf
RUN rm /etc/nginx/conf.d/default.conf
CMD ["nginx", "-g", "daemon off;"]
nginx.conf (Original with NGINX Plus):
resolver 10.11.240.10 valid=5s;
upstream reverse-backend {
zone reverse-backend 64k;
server reverse.default.svc.cluster.local resolve;
}
upstream arrayify-backend {
zone arrayify-backend 64k;
server arrayify.default.svc.cluster.local resolve;
}
upstream lower-backend {
zone lower-backend 64k;
server lower.default.svc.cluster.local resolve;
}
upstream upper-backend {
zone upper-backend 64k;
server upper.default.svc.cluster.local resolve;
}
server {
listen 80;
root /data;
location / {
index index.html index.htm;
}
status_zone backend-servers;
location /reverse/ {
proxy_pass http://reverse-backend/;
}
location /arrayify/ {
proxy_pass http://arrayify-backend/;
}
location /lower/ {
proxy_pass http://lower-backend/;
}
location /upper/ {
proxy_pass http://upper-backend/;
}
}
server {
listen 8080;
root /usr/share/nginx/html;
location = /status.html { }
location /status {
status;
}
}
Everything seems to be working fine with NGINX Plus, and I am able to hit all 4 microservices with a URL such as http://x.y.z.w/service[1|2|3|4]/?str=testnginx, where http://x.y.z.w is my external IP and NGINX takes care of routing requests internally. Now I want to do the same without NGINX Plus, using open source NGINX only.
Below are the updated files for NGINX:
Dockerfile (Updated for NGINX) :
FROM debian:8.3
RUN apt-get update && apt-get -y install wget
RUN mkdir -p /etc/ssl/nginx && wget -P /etc/ssl/nginx https://cs.nginx.com/static/files/CA.crt
#COPY nginx-repo.key /etc/ssl/nginx/nginx-repo.key
#COPY nginx-repo.crt /etc/ssl/nginx/nginx-repo.crt
#RUN wget http://nginx.org/keys/nginx_signing.key && apt-key add nginx_signing.key
RUN apt-get -y install apt-transport-https libcurl3-gnutls lsb-release
#RUN printf "deb https://plus-pkgs.nginx.com/debian `lsb_release -cs` nginx-plus\n" | tee /etc/apt/sources.list.d/nginx-plus.list
RUN wget -P /etc/apt/apt.conf.d https://cs.nginx.com/static/files/90nginx
RUN apt-get update && apt-get -y install nginx
RUN mkdir /data
COPY index.html /data/index.html
COPY nginx.conf /etc/nginx/conf.d/backend.conf
#RUN rm /etc/nginx/conf.d/default.conf
CMD ["nginx", "-g", "daemon off;"]
nginx.conf (Updated for NGINX) :
resolver 10.3.240.10 valid=5s;
upstream reverse-backend {
zone reverse-backend 64k;
server reverse.default.svc.cluster.local;
}
upstream arrayify-backend {
zone arrayify-backend 64k;
server arrayify.default.svc.cluster.local;
}
upstream lower-backend {
zone lower-backend 64k;
server lower.default.svc.cluster.local;
}
upstream upper-backend {
zone upper-backend 64k;
server upper.default.svc.cluster.local;
}
server {
listen 80;
root /data;
location / {
index index.html index.htm;
}
# status_zone backend-servers;
location /reverse/ {
proxy_pass http://reverse-backend/;
}
location /arrayify/ {
proxy_pass http://arrayify-backend/;
}
location /lower/ {
proxy_pass http://lower-backend/;
}
location /upper/ {
proxy_pass http://upper-backend/;
}
}
#server {
# listen 8080;
#
# root /usr/share/nginx/html;
#
# location = /status.html { }
#
# location /status {
# status;
# }
#}
Basically, I have removed the resolve parameter and the status server block, which are NGINX Plus features. I am able to build the Docker image, push it to Google's container registry, and create my deployments and service, but I am getting 404 Not Found.
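For context on the removed resolve parameter: with open source NGINX, periodic DNS re-resolution of an upstream name is usually done by putting the hostname in a variable and relying on the resolver directive, roughly like this (a sketch, not part of the original setup):
resolver 10.3.240.10 valid=5s;
location /reverse/ {
    set $reverse_backend reverse.default.svc.cluster.local;
    # strip the /reverse/ prefix, mirroring the trailing slash in the original proxy_pass
    rewrite ^/reverse/(.*)$ /$1 break;
    # using a variable makes NGINX resolve the name at request time instead of at startup
    proxy_pass http://$reverse_backend;
}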
Am I missing something here, or is this a limitation of NGINX?
Any suggestions from anyone with prior experience working with NGINX, Docker and Kubernetes on Google Cloud would be appreciated.

uwsgi ini file configuration

I have a Python application running on a server. I am trying to configure the uWSGI server for my application.
uwsgi.conf file location : /etc/init/uwsgi.conf
# file: /etc/init/uwsgi.conf
description "uWSGI starter"
start on (local-filesystems and runlevel [2345])
stop on runlevel [016]
respawn
# home - is the path to our virtualenv directory
# pythonpath - the path to our django application
# module - the wsgi handler python script
exec /home/testuser/virtual_environments/django-new/bin/uwsgi \
--uid root \
--home /home/testuser/virtual_environments/django-new \
--pythonpath /home/testuser/django \
--socket /tmp/uwsgi.sock \
--chmod-socket \
--module wsgi \
--logdate \
--optimize 2 \
--processes 2 \
--master \
--logto /var/log/uwsgi.log
and I have created this .ini file:
/etc/uwsgi/app-available/uwsgi.ini
[uwsgi]
home = /home/testuser/virtual_environments/django-new
pythonpath = /home/testuser/django
socket = /tmp/uwsgi1.sock
module = wsgi
optimize = 2
processes = 2
nginx configuration:
/etc/nginx/site-available/default
upstream uwsgicluster {
#server unix:/tmp/uwsgi.sock;
server unix:///tmp/uwsgi1.sock;
}
server {
location / {
uwsgi_pass uwsgicluster;
#uwsgi_pass unix:/run/uwsgi/app/scisphere/socket;
#proxy_pass http://uwsgicluster;
include /etc/nginx/uwsgi_params;
}
location /static {
root /home/testuser/django/main;
}
location /media {
root /home/testuser/django;
}
}
I try to start the server:
sudo service uwsgi start
I am getting a 502 Bad Gateway error. How do I configure the uwsgi.ini file in uwsgi.conf?
Add include uwsgi_params; below uwsgi_pass uwsgicluster;.
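Separately, if the intent is for the upstart job to load the ini file instead of duplicating its options on the command line, the exec line would typically reference it with --ini (an assumption based on standard uWSGI usage, not something stated in the answer):
exec /home/testuser/virtual_environments/django-new/bin/uwsgi \
    --ini /etc/uwsgi/app-available/uwsgi.ini \
    --master \
    --logto /var/log/uwsgi.log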
