Nginx + FastCgi++ - nginx

I'm new to web development, and I need to know how to configure an Nginx server to use it with FastCgi++.
I tried to use these examples, but I don't understand exactly what I need to compile for FastCgi++, how I create the FastCgi++ process, what I need to pass to Nginx, etc.
Please help me find some step-by-step instructions/tutorial to solve my problem.

First, get the fastcgipp library and find an example, say from
http://www.nongnu.org/fastcgipp/doc/1.2/helloWorld.html
Build your program by linking against the fastcgipp library.
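For reference, a minimal request class modeled on that helloWorld example (a sketch against the fastcgipp 1.2 API; the file name mySrv.cpp is illustrative):

#include <fastcgi++/request.hpp>
#include <fastcgi++/manager.hpp>

// A request handler: fastcgipp calls response() once per request.
class HelloWorld: public Fastcgipp::Request<char>
{
    bool response()
    {
        // HTTP headers and body both go to the out stream.
        out << "Content-Type: text/html; charset=utf-8\r\n\r\n";
        out << "<html><body>Hello World</body></html>";
        return true;    // true means the request is complete
    }
};

int main()
{
    // The manager accepts FastCGI connections and dispatches requests.
    Fastcgipp::Manager<HelloWorld> fcgi;
    fcgi.handler();
    return 0;
}

Compile and link with something like g++ mySrv.cpp -lfastcgipp -o mySrv (the exact linker flags depend on how the library was built; some versions also need Boost libraries on the link line).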
Then two steps are needed.
A) Inside the nginx config, say:
/etc/nginx/sites-available$ ls
default
/etc/nginx/sites-available$ vi default
server {
    location / {
        fastcgi_pass localhost:9000;   # <-- add something like this
        # fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param QUERY_STRING $query_string;
    }
}
B) Install spawn-fcgi, and use it as below ... where your application is mySrv:
spawn-fcgi -p 9000 -n -f ./mySrv
Now use your browser to go to localhost (if nothing else is configured there).

Related

uWSGI: How can I mount a paste-deploy (Pyramid) app?

What I have:
I have a Pyramid application that is built from a Paste ini, served by uWSGI and proxied by nginx. It works great. Here is the nginx config:
server {
    listen 80;
    server_name localhost;
    access_log /var/log/myapp/nginx.access.log;
    error_log /var/log/myapp/nginx.error.log warn;
    location / {
        uwsgi_pass localhost:8080;
        include uwsgi_params;
    }
}
Here is the uWSGI ini configuration:
[uwsgi]
socket = 127.0.0.1:8080
virtualenv = /srv/myapp/venv
die-on-term = 1
master = 1
logto = /var/log/myapp/uwsgi.log
This configuration is located inside Pyramid's production.ini, such that I serve the application with this command:
uwsgi --ini-paste-logged production.ini
All of this works just fine.
What I want to do:
One simple change. I want to serve this application as a subfolder, rather than as the root. Rather than serving it from http://localhost, I want to serve it from http://localhost/myapp.
And now everything is broken.
If I change the nginx location directive from / to /myapp or /myapp/, I get 404s, because the WSGI application receives URIs that all have /myapp prepended.
The uWSGI solution appears to be to mount the WSGI callable on the subfolder and then pass the --manage-script-name option, at which point uWSGI should magically strip the subfolder prefix from the URI and fix the issue.
However, the documentation and every other resource I've found have only given examples of the form:
mount = /myapp=myapp.py
I don't have a myapp.py that contains a WSGI callable, because my callable is being built by PasteDeploy.
So, is it possible to mount the WSGI callable from within the Paste ini? Or am I going to have to split the uwsgi configuration out of the Paste ini and also define a separate wsgi.py with a call to paste.deploy.loadapp to generate a wsgi callable that I can mount?
Or is there another way to serve this app as a subfolder from nginx while not messing up the url reversing?
Yes, it's definitely possible to mount your Pyramid app as a subdirectory with Nginx. What you'll need to use is the Modifier1 option from uWSGI, like so:
location /myapp {
    include uwsgi_params;
    uwsgi_param SCRIPT_NAME /myapp;
    uwsgi_modifier1 30;
    uwsgi_pass localhost:8080;
}
The magic value of 30 tells uWSGI to strip the value of SCRIPT_NAME from the start of PATH_INFO in the request. Pyramid then receives the request and processes it correctly.
As long as you're using the standard Pyramid machinery to generate URLs or paths within your application, SCRIPT_NAME will automatically be incorporated, meaning all URLs for links/resources etc are correct.
The documentation isn't the clearest, but there's more on the modifiers available at: https://uwsgi-docs.readthedocs.org/en/latest/Protocol.html
I wanted to do what you suggest, but this is the closest solution I could find: if you are willing to modify your PasteDeploy configuration, you can follow the steps at: http://docs.pylonsproject.org/docs/pyramid/en/1.0-branch/narr/vhosting.html
Rename [app:main] to [app:mypyramidapp] and add a section reading:
[composite:main]
use = egg:Paste#urlmap
/myapp = mypyramidapp
I also had to add this to my nginx configuration:
uwsgi_param SCRIPT_NAME '';
and install the paste module
sudo pip3 install paste
I wonder if there is a way to "mount" a PasteDeploy app as the original question asked...
I've hit this very problem with my deployment after switching from Python 2 to Python 3.
With Python 2 I used the uwsgi_modifier1 30; trick, but it doesn't work anymore with Python 3, as described here: https://github.com/unbit/uwsgi/issues/876
It is very badly documented (not at all? I know it from reading the uWSGI source code), but the --mount option accepts the following syntax:
--mount=/app=config:/path/to/app.ini
Please note: with --mount you also need the --manage-script-name option.
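In uWSGI ini form, the same --mount setup would look roughly like this (a sketch; the socket and paths are illustrative, echoing the configuration above):

[uwsgi]
socket = 127.0.0.1:8080
; mount the PasteDeploy app from its ini file under /myapp
mount = /myapp=config:/srv/myapp/production.ini
; strip /myapp into SCRIPT_NAME before the app sees the request
manage-script-name = true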
There are other problems with it: https://github.com/unbit/uwsgi/issues/2172
It's trivial to write a wrapper script around a PasteDeploy app, which is the way I deploy now:
from paste.script.util.logging_config import fileConfig as configure_logging
from paste.deploy import loadapp as load_app
from os import environ

# The path to the PasteDeploy .ini file comes from the environment.
config_file = environ['INI_FILE']
configure_logging(config_file)
# Build the WSGI callable that uWSGI will mount as 'application'.
application = load_app('config:' + config_file)
Save it to e.g. app.py and you can use it with --mount /app=app.py; the INI_FILE environment variable should point to your .ini file.
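Put together, an invocation might look like this (the socket and file names are illustrative):

INI_FILE=production.ini uwsgi --socket 127.0.0.1:8080 --manage-script-name --mount /app=app.py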
As a side note, I'm considering moving away from uWSGI; it's buggy and the documentation is lacking.

HHVM 502 Bad gateway - Fedora 21

G'day.
I have Fedora 21 and HHVM version 3.7. My issue, unfortunately, is that I can start the service and access my pages without issue. However, if I keep refreshing the page, HHVM crashes, and checking the status returns an error.
The HHVM error log returns:
Unable to open pid file /var/run/hhvm/pid for write
Now I can restart the server and it works fine, but after only a handful of requests it crashes as above.
PHP-FPM is not running, and nothing except HHVM is running on port 9000.
Here is some config info:
HHVM - server.ini
; php options
pid = /var/run/hhvm/pid
; hhvm specific
hhvm.server.port = 9000
hhvm.server.type = fastcgi
hhvm.server.source_root = /srv/www
hhvm.server.default_document = index.php
hhvm.log.level = Error
hhvm.log.use_log_file = true
hhvm.log.file = /var/log/hhvm/error.log
hhvm.repo.central.path = /var/run/hhvm/hhvm.hhbc
HHVM - service
[Unit]
Description=HipHop Virtual Machine (FCGI)
[Service]
ExecStart=/usr/bin/hhvm --config /etc/hhvm/server.ini --user hhvm --mode daemon -vServer.Type=fastcgi -vServer.Port=9000
PrivateTmp=true
[Install]
WantedBy=multi-user.target
NGINX - site file
##NGINX STUFF
location ~ \.php$ {
    include /etc/nginx/fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index bootstrap.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
##MORE NGINX STUFF
So from the info provided, is there any hint as to what could be the issue?
Cheers guys.
This is a very simple permission issue, just like your log mentions: the user HHVM runs as has no write access to the pid directory, so it cannot create the pid file.
sudo chmod -R 777 /var/run/hhvm
I had the same problem on Ubuntu.
HHVM Unable to read pid file /var/run/hhvm/pid for any meaningful pid after reboot
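If you'd rather not open the directory up to everyone, a tighter alternative (assuming HHVM runs as the hhvm user, as the --user hhvm flag in the unit file above suggests) is to hand the directory to that user instead:

sudo chown -R hhvm:hhvm /var/run/hhvm
sudo chmod 755 /var/run/hhvm

Note that /var/run is usually a tmpfs, so this may need to be reapplied (or automated) after a reboot, which matches the "after reboot" symptom in the linked question.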
Another problem, when you have a lot of requests, can be the max open files limit. When you go over the limit, HHVM crashes. Normally you should see that error in your log, and you can increase that limit.
https://serverfault.com/questions/679408/hhvm-exit-after-too-many-open-files
Here is my question on ServerFault.
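If you manage HHVM with the systemd unit shown above, one way to raise that limit is a LimitNOFILE line in the [Service] section (65536 is an illustrative value):

[Service]
ExecStart=/usr/bin/hhvm --config /etc/hhvm/server.ini --user hhvm --mode daemon -vServer.Type=fastcgi -vServer.Port=9000
PrivateTmp=true
LimitNOFILE=65536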

Getting random errors when setting up Joomla with nginx instead of apache

I'm trying to set up a Joomla 3 instance on my server, where I am already using nginx together with ownCloud as well as the blogging platform Ghost.
My first attempt was actually quite successful, and it only failed in the last installation step (creating configuration files). I thought this was due to wrong permissions preventing the file from being created, so I wrote a short test script to verify that php5-fpm had write permissions in the folder, and it worked.
After several failed retries and no log files, I decided to delete the directory and download Joomla again. Since then, nothing works. Every time I unpack the zip (freshly downloaded or the same file) I get one of the following arbitrary error scenarios:
I get redirected to installation/installation/index.php instead of installation/index.php
I had errors about missing php files
I had errors about missing php classes:
JApplicationBase
JApplicationWebClient
some view class
...
After every unzip and re-download the error changes, even though I don't change anything in the nginx or php5-fpm config.
After downloading and extracting the files, I use the following commands to set up the Joomla directory properly:
sudo chown -R joomla_user .
(optional, only if I downloaded and extracted the zip as another user; you see, I really tried every possible combination)
sudo chgrp -R www-data .
(nginx runs as www-data, but joomla_user isn't in the www-data group; the files and folders are only readable by nginx, not writable, which I thought wasn't a problem since the writes are done by PHP anyway)
sudo chmod -R g+s .
(to make sure that all future uploaded files will be readable by nginx)
my nginx config in sites-available (and sites-enabled) looks like this:
server {
    listen 80;
    server_name joomla.server_url;
    root /home/joomla_user/www/joomla3;
    index index.php index.html index.htm default.html default.htm;

    # Support Clean (aka Search Engine Friendly) URLs
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # deny running scripts inside writable directories
    location ~* /(images|cache|media|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ {
        return 403;
        error_page 403 /403_error.html;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php5-fpm-joomla_user.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    # caching of files
    location ~* \.(ico|pdf|flv)$ {
        expires 1y;
    }

    location ~* \.(js|css|png|jpg|jpeg|gif|swf|xml|txt)$ {
        expires 14d;
    }
}
The php5-fpm pool config is basically a copy-paste of the default config with a changed socket name and pool name.
In summary again: PHP 5 execution works, and permissions allow creating and writing files (at least in those directories I checked). However, after the installation failed to finish the first time, I now get truly random error messages every time I unzip the Joomla 3 zip file, even when I download it fresh (directly to the server via wget) from the official website (http://www.joomla.org/download.html).
Does anyone have experience using Joomla on top of nginx? Any idea how I could get rid of those errors and make it run?
Update:
My PHP version is 5.4.4:
PHP 5.4.4-14+deb7u8 (cli) (built: Feb 17 2014 09:18:47)
Copyright (c) 1997-2012 The PHP Group
Zend Engine v2.4.0, Copyright (c) 1998-2012 Zend Technologies
Also, yesterday I was talking with a Joomla developer about the problem; they suggested directory permission problems, but the issue still exists even after executing chmod -R u+rw . in the Joomla directory.
I didn't manage to get rid of the errors, but got the suggestion to use the tuxlite script. Running ./domain.sh add joomla JOOMLA_SERVER_URL created a new config with all the necessary directories. The nginx config also adds an SSL section, which in my case referenced the wrong certificate files. After fixing that, Joomla was up and running again.
I still had the first problem, though: Joomla didn't finish the installation. This was due to a too-short fastcgi_read_timeout (the default 60 seconds). Raising it to a few minutes made it work.
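For example, with an illustrative 300s value, the directive goes inside the PHP location block from the config above:

location ~ \.php$ {
    fastcgi_read_timeout 300s;
    fastcgi_pass unix:/var/run/php5-fpm-joomla_user.sock;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}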
The last configuration I changed was in Joomla's nginx configuration:
location / {
    try_files $uri $uri/ /index.php?$args;
}
was changed to
location / {
    try_files $uri $uri/ /index.php?q=$request_uri;
}
as it is described in the Joomla documentation for nginx.

How can I configure nginx and fcgi to call separate executables depending on request-uri

Say I have a directory structure like this:
/index
/contact
/view_post
All three are executables that just output html using something basically like echo-cpp from fcgi examples.
The documentation I've read has just shown how to have one program that parses the request URI and calls various sections from that. I want each of these to be a separate program instead of one program parsing the request URI and serving the page based on that.
So if I went to localhost/index, the index program would be run with input to it (POST data), and its output would go to nginx to serve up the page.
I'm not sure if fcgi is even the right tool for this, so if something else would work better then that is fine.
You can do it with nginx and fcgi. The simplest way to do this is by using spawn-fcgi.
First you will need to set up your nginx.conf. Add the following inside the server {} block:
location /index {
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;
}
location /contact {
    fastcgi_pass 127.0.0.1:9001;
    include fastcgi_params;
}
location /view_post {
    fastcgi_pass 127.0.0.1:9002;
    include fastcgi_params;
}
Restart nginx and then run your apps, listening on the same ports as declared in nginx.conf.
Assuming your programs are in the ~/bin/ folder:
~ $ cd bin
~/bin $ spawn-fcgi -p 9000 ./index
~/bin $ spawn-fcgi -p 9001 ./contact
~/bin $ spawn-fcgi -p 9002 ./view_post
Now requests to localhost/index will be forwarded to your index program, and its output will go back to nginx to serve the pages! The same goes for contact and view_post!
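Since the question mentions echo-cpp, each of those programs can be a small libfcgi binary along these lines (a minimal sketch, assuming the fcgi development headers are installed; the page body is illustrative):

#include <fcgio.h>      // fcgi_streambuf and the FCGX_* API from libfcgi
#include <ostream>

int main() {
    FCGX_Request request;
    FCGX_Init();
    FCGX_InitRequest(&request, 0, 0);   // 0: use the socket spawn-fcgi passes on stdin

    // Answer each accepted request with a static page.
    while (FCGX_Accept_r(&request) == 0) {
        fcgi_streambuf out_buf(request.out);
        std::ostream out(&out_buf);
        out << "Content-Type: text/html\r\n\r\n"
            << "<html><body>index page</body></html>";
        FCGX_Finish_r(&request);
    }
    return 0;
}

Build it with something like g++ index.cpp -lfcgi++ -lfcgi -o index, then launch it with spawn-fcgi as shown above.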

PHP-FPM serving blank pages after fatal php error

I've got a custom setup of nginx and php-fpm on Arch Linux. I'll post my configs below. I think I've read the documentation for these two programs front to back about six times by now, but I've reached a point where I simply can't squeeze any more information out of the system and thus have nothing left to google. Here's the skinny:
I compiled both nginx and php from scratch (I'm very familiar with this, so presumably no problems there). I've got nginx set up to serve things correctly, which it does consistently: php files get passed through the unix socket (which is both present and read-/write-accessible to the http user, the user that both nginx and php-fpm run as), while regular files that exist get served directly. Requests for folders and for files that don't exist are both sent to /index.php. All permissions are in order.
The Problem
My pages get served just fine until there's a php error. The error gets dumped to nginx's error log, and all further requests for pages from that specific child process of php-fpm return blank. They do appear to be processed, as evidenced by the fact that subsequent calls to the file with errors continue to dump error messages into the log file, but both flawed and clean files alike are returned completely blank with a 200 status code.
What's almost wilder: I found that if I then just sit on it for a few minutes, the offending php-fpm child process doesn't die, but a new one is spawned on the next request anyway, and the new process serves pages properly. From that point on, every second request is blank, while the other comes back normal, presumably because the child processes take turns serving requests.
My test is the following:
// web directory listing:
mysite/
--index.php
--bad_file.php
--imgs/
----test.png
----test2.png
index.php:
<?php
die('all cool');
?>
bad_file.php*:
<?php
non_existent_function($called);
?>
* Note: I had previously posted bad_file.php to contain the line $forgetting_the_semicolon = true, but found that this doesn't actually produce the error I'm talking about (this was a simplified example that I've now implemented on my own system). The above code, however, does reproduce the error, as it produces a fatal error instead of a parse error.
test calls from terminal:
curl -i dev.mysite.com/ # "all cool"
curl -i dev.mysite.com/index.php # Redirected to / by nginx
curl -i dev.mysite.com/imgs # "all cool"
curl -i dev.mysite.com/imgs/test.png # returns test.png, printing gibberish
curl -i dev.mysite.com/nofile.php # "all cool"
curl -i dev.mysite.com/bad_file.php # blank, but error messages added to log
curl -i dev.mysite.com/ # blank! noooooooo!!
curl -i dev.mysite.com/ # still blank! noooooooo!!
#wait 5 or 6 minutes (not sure how many - probably corresponds to my php-fpm config)
curl -i dev.mysite.com/ # "all cool"
curl -i dev.mysite.com/ # blank!
curl -i dev.mysite.com/ # "all cool"
curl -i dev.mysite.com/ # blank!
#etc....
nginx.conf:
user http;
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type text/plain;
sendfile on;
keepalive_timeout 65;
index /index.php;
server {
listen 127.0.0.1:80;
server_name dev.mysite.net;
root /path/to/web/root;
try_files /maintenance.html $uri #php;
location = /index.php {
return 301 /;
}
location ~ .php$ {
include fastcgi_params;
fastcgi_pass unix:/usr/local/php/var/run/php-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
location #php {
include fastcgi_params;
fastcgi_pass unix:/usr/local/php/var/run/php-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root/index.php;
}
}
}
php-fpm.conf:
[global]
pid = run/php-fpm.pid
error_log = log/php-fpm.log
log_level = warning
[www]
user = http
group = http
listen = var/run/php-fpm.sock
listen.owner = http
listen.group = http
listen.mode = 0660
pm = dynamic
pm.max_children = 5
pm.start_servers = 1
pm.min_spare_servers = 1
pm.max_spare_servers = 3
(php.ini available upon request)
In Summary
All pages are served as expected until there's a php error, at which point all subsequent requests to that particular php-fpm child process are apparently processed, but returned as completely blank pages. Errors that occur are reported and continue to be reported in the nginx error log file.
If anyone's got any ideas, throw them at me. I'm dead in the water til I figure this out. Incidentally, if anyone knows of a source of legitimate documentation for php-fpm, that would also be helpful. php-fpm.org appears to be nearly useless, as does php.net's documentation for fpm.
Thanks!
I've been messing with this since yesterday, and it looks like it's actually a bug with output buffering. After trying everything, reading everything, and going crazy on it, I finally turned off output buffering and it worked fine. I've submitted a bug report here.
For those who don't know, output buffering is a setting in php.ini that prevents PHP from sending output across the line as soon as it receives it. Not a totally crucial feature. I switched it from 4096 to Off:
;php.ini:
...
;output_buffering = 4096
output_buffering = Off
...
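For the php.ini change to take effect, php-fpm has to be restarted; on Arch that would typically be something like:

sudo systemctl restart php-fpm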
Hope this helps someone else!
