502 Bad Gateway error with nginx and unicorn

I have nginx 1.4.1 and unicorn set up on CentOS. I am getting a 502 Bad Gateway error, and my nginx log shows this:
*1 connect() to unix:/tmp/unicorn.pantry.sock failed (2: No such file or directory) while connecting to upstream, client: 192.168.1.192, server: , request: "GET / HTTP/1.1", upstream: "http://unix:/tmp/unicorn.pantry.sock:/", host: "192.168.1.30"
There is no /tmp/unicorn.pantry.sock file or directory. I am thinking it might be a permission error, so the file can't be written. If so, who requires what permission? I have also read that I can use a TCP connection instead.
Also, I don't understand where the 192.168.1.192 comes from.
I just want to make it work. How can I do it?

OK, I figured this out. I had unicorn.sock in my shared directory, so I needed to point unix: at it.
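For anyone else hitting this, a minimal sketch of the nginx side, assuming a Capistrano-style layout (the /var/www/pantry/shared path and the upstream name are assumptions; use whatever path the listen directive in your unicorn config actually sets):

# hypothetical paths: must match unicorn's own "listen" setting
upstream unicorn_pantry {
    server unix:/var/www/pantry/shared/unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://unicorn_pantry;
    }
}

Alternatively, unicorn can listen on TCP (listen "127.0.0.1:8080" in its config) and nginx can proxy_pass to http://127.0.0.1:8080; that is the TCP option mentioned in the question. As for the 192.168.1.192: the client field in an nginx error log is simply the address of the machine that sent the request.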

Related

How to ban a specific 404 request using nginx

My server receives a lot of requests for ".well-known/assetlinks.json". I think those requests are from hackers.
Here what I get in the error file:
2022/03/07 20:16:02 [error] 44030#44030: *878180 open() "/somefolder/www/public/.well-known/assetlinks.json" failed (2: No such file or directory), client: 82.65.5.229, server: www.myserver.com, request: "GET /.well-known/assetlinks.json HTTP/1.1", host: "myserver.com"
How can I set up nginx to ban the requester? Maybe a two-hour ban...
I don't understand how to do it from the documentation.
This would not be a good idea, especially as certain web crawlers may be caught in this trap. It is recommended to leave the behaviour of most errors alone (apart from custom error pages).
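For what it's worth, /.well-known/assetlinks.json is a standard Android App Links probe, so many of these requests are legitimate. If the goal is just to keep them out of the logs rather than to ban anyone, a minimal sketch:

# answer this path directly, without touching the filesystem,
# so neither the access log nor the error log records it
location = /.well-known/assetlinks.json {
    access_log off;
    log_not_found off;
    return 404;
}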

Nginx: how to avoid logging of not found files

I would like to avoid logging these useless rows.
Note: they are useless because our server is not running WordPress. I'd simply like to keep these lines out of the log file, because we monitor error log file sizes and they cause false positives.
[error] 22328#22328: *2090 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 77.89.13.110, server: , request: "GET /wp-admin.php HTTP/1.1", upstream: "fastcgi://unix:/run/php/php7.4-fpm.sock:", host: ""
How can I exclude these kinds of rows?
I resolved this by adding two location blocks to my nginx config file, one each for the wp-admin and wp-login calls.
In these I simply disabled logging.
I cannot and do not want to disable logging of 404s in general, because if a user reaches a genuine 404 I must understand what is wrong.
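A minimal sketch of what those two location blocks might look like (the choice of return 404 is an assumption; answering directly in nginx avoids the FastCGI pass to PHP-FPM, which is what produced the "Primary script unknown" error line):

# answer the WordPress probes directly and keep them out of both logs
location = /wp-admin.php {
    access_log off;
    log_not_found off;
    return 404;
}

location = /wp-login.php {
    access_log off;
    log_not_found off;
    return 404;
}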

nginx: connect() to unix:/var/run/hhvm/sock failed, No such file

I am trying to install FB-CTF, which uses HHVM and nginx. Everything is set up completely by the shell script itself, but now the error log shows:
2017/01/18 21:48:17 [crit] 15143#0: *6 connect() to unix:/var/run/hhvm/sock failed (2: No such file or directory) while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/hhvm/sock:", host: "localhost"
Actually, /var/run/hhvm/ contains only hhvm.hhbc. I am getting a 502 Bad Gateway.
I reinstalled HHVM; the missing files were reinstalled and all the dependent files were correctly restored.
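If reinstalling alone doesn't bring the socket back, it's worth confirming that HHVM is configured to listen where nginx expects. A quick check, assuming default package locations for the config file:

# does HHVM's config point at the socket nginx expects?
grep file_socket /etc/hhvm/server.ini
# expected something like: hhvm.server.file_socket = /var/run/hhvm/sock

# restart HHVM and confirm the socket is (re)created
sudo service hhvm restart
ls -l /var/run/hhvm/sock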

Trouble using Meteor-Please to deploy to CentOS/RHEL

I've been trying for some time to use the mplz utility to deploy and run a Meteor app. It seems (well, now it seemed) simpler and more unified than Meteor Up, which says it is targeted towards Debian/Ubuntu distros.
After running a successful mplz setup on a clean CentOS 7 image, I cannot access the app. All I have ever gotten is an "nginx error!" page. In the nginx error log I saw this at first:
2016/03/14 17:14:47 [crit] 4997#0: *2 connect() to 127.0.0.1:3000 failed (13: Permission denied) while connecting to upstream, client: myLocalIP, server: domain.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "domain.com"
After doing some research I believe I fixed the permissions issue by changing the nginx user and adding the users to the appropriate groups. The website still only displayed the nginx error page, but had a new message in the error_log.
I am now getting a connection refused error:
2016/03/14 18:15:59 [error] 2489#0: *2 connect() failed (111: Connection refused) while connecting to upstream, client: myLocalIp, server: domain.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "domain.com"
All I'm really trying to get done is deploy a persistent copy of my Meteor app to the server. I am not at all familiar with nginx or server-ops kind of stuff; I've mainly worked on adding features to existing websites.
I would love any suggestions of how to solve this issue OR how to better or more easily deploy Meteor to a public server.
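Two leads worth checking here. On CentOS, the earlier (13: Permission denied) on connect() to an upstream is classically SELinux rather than file permissions, and (111: Connection refused) means nothing is listening on port 3000, i.e. the Meteor app itself never started. A diagnostic sketch:

# allow nginx to make outbound connections under SELinux (CentOS default policy)
sudo setsebool -P httpd_can_network_connect 1

# is anything actually listening on the port nginx proxies to?
sudo ss -tlnp | grep ':3000'
# if nothing shows up, the Meteor app is not running; check the app's own
# logs (or whatever process manager mplz set up) rather than the nginx config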

765 recv() failed (104: Connection reset by peer) while reading response header from upstream

2014/03/31 23:06:50 [error] 25914#0: *765 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 173.77.251.136, server: wiki.resonant-rise.com, request: "POST /index.php?title=Chisel&action=submit HTTP/1.1", upstream: "fastcgi://127.0.0.1:9016", host: "wiki.resonant-rise.com", referrer: "http://wiki.resonant-rise.com/index.php?title=Chisel&action=edit"
2014/03/31 23:06:50 [error] 25914#0: *765 open() "/usr/share/nginx/html/50x.html" failed (2: No such file or directory), client: 173.77.251.136, server: wiki.resonant-rise.com, request: "POST /index.php?title=Chisel&action=submit HTTP/1.1", upstream: "fastcgi://127.0.0.1:9016", host: "wiki.resonant-rise.com", referrer: "http://wiki.resonant-rise.com/index.php?title=Chisel&action=edit"
I have a MediaWiki installation and an IPB installation. They both throw up errors, but this one error from MediaWiki prevents me from posting semi-large articles. I have tried a lot of the solutions out there: adding catch_workers_output = yes, adjusting the pm.* settings. Still not able to resolve this issue. I am coming to my wits' end trying to figure this one out.
PHP-FPM Conf
http://pastie.org/private/aczklc52ll9yv0uz5drcqg
PHP-FPM WWW.CONF
http://pastie.org/private/wod3xipxhm8ractksw7ha
NGINX VHOST for MEDIAWIKI
http://pastie.org/private/h9co8aykbdmfzk2bd5qq
If the failure depends on the size of the pages, it has to do with how much work the operation causes. My wild guess would be: increase the timeout (you currently have send_timeout 60s;).
It's easy for the parse time of a very long page to exceed 60 seconds, especially if you're on a low-power server, have not tuned performance, or have enabled heavy parser extensions.
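Following that suggestion, a sketch of where the timeouts live in the nginx vhost (300s is illustrative, not tuned; fastcgi_read_timeout is usually the one that matters for a slow upstream response):

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9016;   # matches the upstream in the log
    fastcgi_read_timeout 300s;     # how long nginx waits for PHP-FPM to respond
    send_timeout 300s;             # the directive the answer mentions
}

If PHP-FPM itself is killing the worker mid-request, request_terminate_timeout in www.conf and max_execution_time in php.ini may also need raising.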
In my case, it was that the PHP version of the project was different from the version of PHP I had been using.
