I'm running a basic WordPress server on Linode.com, with Ubuntu 14.04 and nginx. About two weeks ago, the server began crashing. A server reboot fixes the issue, but after about five hours it only serves the "An error occurred." page from nginx. The following error shows up in the error log:
2015/12/17 19:53:12 [error] 3183#0: *13129 connect() to unix:/var/run/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 46.166.139.20, server: example.com, request: "POST /xmlrpc.php HTTP/1.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "45.79.95.93"
Is this possibly an issue with the config files, or maybe with the host?
Your php5-fpm service has stopped, which is why its socket file is no longer available in /var/run. Use the command below to restart the php5-fpm service:
service php5-fpm restart
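To confirm the restart worked, check the service status and make sure the socket nginx expects has come back (the socket path below is the one from your error log):

service php5-fpm status
ls -l /var/run/php5-fpm.sock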
Related
I have a running Scrapyd instance. This instance has been cloned and is now up and running under another server IP. The cloned server works just fine, except that I can no longer deploy to the new IP.
I am getting
retry_on_eintr(check_call, [sys.executable, 'setup.py', 'clean', '-a', 'bdist_egg', '-d', d],
Deploying to project "test" in http://myip:6843/addversion.json
Deploy failed (504):
<html>
<head><title>504 Gateway Time-out</title></head>
<body bgcolor="white">
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx/1.14.0 (Ubuntu)</center>
</body>
</html>
The nginx config looks OK (localhost), and so does UFW. The IPs are correct, and the web interface is reachable. Only the deploy fails.
Nginx error log:
[error] 1180#1180: *62 upstream timed out (110: Connection timed out) while reading response header from upstream, client: myip, server: , request: "POST /addversion.json HTTP/1.1", upstream: "http://127.0.0.1:6800/addversion.json", host: "myip:6843"
What am I missing?
Found the problem. Scrapy was trying to reach a remote MySQL server which was blocked for this IP.
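In case anyone else hits this: a quick way to check from the new server whether the database host is reachable at all (the host and port below are placeholders for your own MySQL server):

nc -vz -w 5 db.example.com 3306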
I'm trying to configure rtorrent with xmlrpc using nginx as the web server. I'm running into an issue right now where, when I run this command:
xmlrpc localhost/rpc system.listMethods
I get a 502. nginx logs this error:
connect() to unix:/tmp/scgi.socket failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "POST /rpc/RPC2 HTTP/1.1", upstream: "scgi://unix:/tmp/scgi.socket:", host: "localhost"
I'm setting the permissions for the socket properly (I think). I've been working on this for a bit, and would appreciate another set of eyes. You can find all the conf files and code here: https://github.com/nVitius/rtorrent-docker
Also, dockerhub link:
https://hub.docker.com/r/nvitius/rtorrent-docker/
After looking at it again this morning, I found that the issue was that rtorrent wasn't picking up the configuration file. I specified the path to it manually, and it works now.
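For reference, this is roughly what "specifying the path manually" looks like; the rc path is a placeholder for wherever the file actually lives in the container, and the scgi_local line inside it has to match the socket nginx points at:

rtorrent -n -o import=/path/to/rtorrent.rc

# inside rtorrent.rc
scgi_local = /tmp/scgi.socket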
I have deployed my app on EC2 using the software in the title, but I am getting an Internal Server Error. Here is the tutorial I have been following.
Here is the error log for me trying to get on the application via the browser:
2014/02/17 19:48:29 [error] 26513#0: *1 connect() to unix:/tmp/uwsgi.sock failed (111: Connection refused) while connecting to upstream, client: xxx.xxx.xxx.xxx, server: localhost, request: "GET / HTTP/1.1", upstream: "uwsgi://unix:/tmp/uwsgi.sock:", host: "ec2-xx-xxx-xx-xxx.compute-1.amazonaws.com"
If your Ansible playbook is based on Matt Wright's tutorial, then all you need to do is reboot after the installation. The playbook doesn't update supervisor with the new program it installs (which is actually the upstream uWSGI referred to by the log), so the program cannot be started.
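If you would rather not reboot, asking supervisor to pick up the newly installed program by hand should have the same effect (the program name below is a placeholder; supervisorctl status shows the real one):

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start myapp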
I've followed https://www.mercurial-scm.org/wiki/HgWebDirStepByStep to get "hg serve" running over CGI - but it's not quite working.
Here is the command I'm using to spawn the CGI:
spawn-fcgi -a 127.0.0.1 -p 9000 -f /path/to/hgweb.cgi -P /tmp/fcgi.pid 2>&1
The output suggests that the process spawned successfully, but a ps -p reveals that the process has already closed down. Sure enough, when I run the above command with -n, it spits out a load of HTML (the list of repositories) and then quits. Isn't it meant to stick around, listening on port 9000?
Telnetting to port 9000 gives "Connection refused" and this appears to be the problem nginx is having also:
2012/02/15 22:16:20 [error] 13483#0: *13 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: emily, request: "GET /hg/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "localhost:8001"
I'm confident my nginx config is correct, although I can post it here if you need to take a look.
Thanks for any help :)
Hi, I'm trying to move my old dev environment to a new machine, but I keep getting "Bad Gateway" errors from nginx. From nginx's error log:
*19 kevent() reported that connect() failed (61: Connection refused) while connecting to upstream, client: 127.0.0.1, server: ~(?<app>[^.]+).gp2, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "backend.gp2:5555"
Does anyone know why this is?
Thanks!
Turned out that PHP-FPM was not running.
Looks like your upstream host at 127.0.0.1:9000 is not accepting connections. Is the upstream process working?
You seem to have nginx configured as a proxy that forwards requests to localhost on port 9000, but nothing is listening on that port.
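A quick way to confirm that, and the usual fix on a PHP setup like this (the exact php-fpm service name depends on your OS and packaging, so treat it as a placeholder):

netstat -an | grep 9000
sudo service php-fpm start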
On my workstation, starting PHP fixes this for me. Note that I'm using PHP 7.4 on my Mac; adjust the PHP version to whatever is installed on your workstation.
Working command:
sudo brew services start php@7.4
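To double-check that the service actually started, you can list what brew services is managing:

brew services list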
Please start your varnish:
sudo varnishd -a 127.0.0.1:80 -T 127.0.0.1:6082 -f /usr/local/etc/varnish/default.vcl -s file,/tmp,500M
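Once varnishd is up, a quick sanity check against the 127.0.0.1:80 listen address from that command should get a response back from Varnish:

curl -I http://127.0.0.1/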