Nginx error recv() failed (104: Connection reset by peer) while proxying connection - nginx

I am using Nginx for HTTP and TCP load balancing. HTTP is working fine, but I am getting the error below for TCP load balancing. I am using the Nginx ngx_stream_core_module module.
nginx version: nginx/1.9.13
Nginx error logs:
2016/07/12 07:46:16 [error] 16737#16737: *32006 recv() failed (104: Connection reset by peer) while proxying connection, client: 27.50.X.X, server: 0.0.0.0:3030, upstream: "10.7.0.12:3030", bytes from/to client:354/318, bytes from/to upstream:318/354
2016/07/12 07:48:53 [error] 16737#16737: *32048 recv() failed (104: Connection reset by peer) while proxying connection, client: 27.50.X.X, server: 0.0.0.0:3030, upstream: "10.7.0.12:3030", bytes from/to client:324/292, bytes from/to upstream:292/324
2016/07/12 07:51:40 [error] 16737#16737: *32109 recv() failed (104: Connection reset by peer) while proxying connection, client: 27.50.X.X, server: 0.0.0.0:3030, upstream: "10.7.0.12:3030", bytes from/to client:324/292, bytes from/to upstream:292/324
Can anyone help me understand why I am getting these errors?
Is there any way to enable access logs for TCP requests coming to Nginx?
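On the second question: newer nginx versions do support access logs in the stream context. A minimal sketch, assuming a stream block matching the ports in the log above; note that access_log and log_format in the stream context require nginx 1.11.4 or later, so 1.9.13 would need an upgrade first (the format name and variables chosen here are illustrative):

```nginx
stream {
    # Per-session log format for TCP connections.
    log_format tcp_basic '$remote_addr [$time_local] '
                         '$protocol $status $bytes_sent $bytes_received '
                         '$session_time "$upstream_addr"';

    server {
        listen 3030;
        proxy_pass 10.7.0.12:3030;
        access_log /var/log/nginx/stream-access.log tcp_basic;
    }
}
```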

Related

Kong/Nginx has bizarre upstream host

Kong/Nginx (I'm not sure which) changes the upstream to an unexpected IP address; I want it to stay localhost. The failing request was made by a web browser to localhost:8443/cost-recovery.
Log message:
2020/12/15 16:50:09 [error] 88#0: *522 connect() failed (111: Connection refused) while connecting to upstream, client: 172.19.0.1, server: kong, request: "GET /login?state=/cost-recovery HTTP/1.1", upstream: "https://192.168.65.2:8444/login?state=/cost-recovery", host: "localhost:8443"
I don't know where it's getting the 192.168.65.2 host, but I want it to be localhost.
I'm using the pantsel/konga container, which uses Kong version 1.2.3. Configuration was done in part via API requests:
Service Request:
{"host":"host.docker.internal","created_at":1608009524,"connect_timeout":60000,"id":"647180a3-6f8c-41ae-9f71-c9fc9db40249","protocol":"http","name":"cost-recovery","read_timeout":60000,"port":8447,"path":null,"updated_at":1608009524,"retries":5,"write_timeout":60000,"tags":null}
Route Request:
{"id":"24100a1d-c679-46b7-93f3-552b055df26b","tags":null,"paths":["\/cost-recovery"],"destinations":null,"protocols":["https"],"created_at":1608009525,"snis":null,"hosts":null,"name":"cost-recovery-route","preserve_host":true,"regex_priority":0,"strip_path":false,"sources":null,"updated_at":1608009525,"https_redirect_status_code":302,"service":{"id":"647180a3-6f8c-41ae-9f71-c9fc9db40249"},"methods":["GET"]}
Plugin Request:
{"name": "access-validator", "protocols": ["https"], "route": { "id": "24100a1d-c679-46b7-93f3-552b055df26b"}, "config": {"redirect_login": true}}
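One plausible source of that address: the service request above sets "host":"host.docker.internal", and on Docker Desktop that name resolves to an internal gateway IP (commonly 192.168.65.2), not to 127.0.0.1. A quick check from inside a container, assuming Docker Desktop is in use (the alpine image is just an example):

```shell
# Resolve host.docker.internal from inside a container; on Docker Desktop
# this typically prints a gateway address such as 192.168.65.2.
docker run --rm alpine getent hosts host.docker.internal
```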

Getting nonstop nginx code 11: Resource not available

I'm running a basic WordPress server on Linode.com, with Ubuntu 14.04 and nginx. About two weeks ago, the server began crashing. A server reboot fixes the issue, but after about five hours it only serves the "An error occurred." page from nginx. The following error shows up in the error log:
2015/12/17 19:53:12 [error] 3183#0: *13129 connect() to unix:/var/run/php5-fpm.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 46.166.139.20, server: example.com, request: "POST /xmlrpc.php HTTP/1.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "45.79.95.93"
Is this possibly an issue with the config files, or maybe with the host?
Your php5-fpm service has stopped; because of this, its socket file is not available under /var/run. Please use the command below to restart your php5-fpm service:
service php5-fpm restart
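Before restarting, you can confirm the diagnosis by checking whether the socket from the error log actually exists. A minimal sketch, assuming the /var/run/php5-fpm.sock path shown in the log:

```shell
# Report whether a UNIX socket exists at the given path.
check_sock() {
    if [ -S "$1" ]; then
        echo "present"
    else
        echo "missing"
    fi
}

# Path taken from the error log above; "missing" suggests php5-fpm is down.
check_sock /var/run/php5-fpm.sock
```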

xmlrpc over nginx w/ scgi

I'm trying to configure rtorrent with xmlrpc using nginx as the web server. I'm running into an issue right now where, when I run this command:
xmlrpc localhost/rpc system.listMethods
I get a 502. nginx logs this error:
connect() to unix:/tmp/scgi.socket failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "POST /rpc/RPC2 HTTP/1.1", upstream: "scgi://unix:/tmp/scgi.socket:", host: "localhost"
I'm setting the permissions for the socket properly (I think). I've been working on this for a bit, and would appreciate another set of eyes. You can find all the conf files and code here: https://github.com/nVitius/rtorrent-docker
Also, dockerhub link:
https://hub.docker.com/r/nvitius/rtorrent-docker/
After looking at it again this morning, I found that the issue was that rtorrent wasn't picking up the configuration file. I specified the path to it manually, and it works now.
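For anyone hitting the same 502: rtorrent and nginx have to agree on the SCGI socket path. A minimal sketch, assuming the /tmp/scgi.socket path from the log (scgi_local is the classic rtorrent.rc directive; scgi_pass belongs to nginx's ngx_http_scgi_module):

```
# ~/.rtorrent.rc — have rtorrent open the SCGI UNIX socket
scgi_local = /tmp/scgi.socket

# nginx server block — forward XML-RPC requests to that socket
# location /RPC2 {
#     include scgi_params;
#     scgi_pass unix:/tmp/scgi.socket;
# }
```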

Nginx, Ansible, and uWSGI with Flask App, Internal Server Error

I have deployed my app on EC2 using the software in the title, but I am getting an Internal Server Error. Here is the tutorial I have been following.
Here is the error log for me trying to get on the application via the browser:
2014/02/17 19:48:29 [error] 26513#0: *1 connect() to unix:/tmp/uwsgi.sock failed (111: Connection refused) while connecting to upstream, client: xxx.xxx.xxx.xxx, server: localhost, request: "GET / HTTP/1.1", upstream: "uwsgi://unix:/tmp/uwsgi.sock:", host: "ec2-xx-xxx-xx-xxx.compute-1.amazonaws.com"
If your Ansible playbook is based on Matt Wright's tutorial, then all you need to do is reboot after the installation. The playbook doesn't update supervisor with the new program it installs (which is actually the upstream uWSGI referred to by the log), so the program cannot be started.
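Instead of a full reboot, asking supervisor to reload its configuration should also pick up the newly installed program. A sketch of the usual sequence, assuming supervisor manages the uWSGI process (the exact program name depends on the playbook):

```shell
sudo supervisorctl reread    # discover newly added program definitions
sudo supervisorctl update    # apply them and start new programs
sudo supervisorctl status    # confirm the uWSGI program is running
```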

hgweb.cgi and nginx - "Connection refused"

I've followed https://www.mercurial-scm.org/wiki/HgWebDirStepByStep to get "hg serve" running over CGI - but it's not quite working.
Here is the command I'm using to spawn the CGI:
spawn-fcgi -a 127.0.0.1 -p 9000 -f /path/to/hgweb.cgi -P /tmp/fcgi.pid 2>&1
The output suggests that the process spawned successfully, but a ps -p reveals that the process has already closed down. Sure enough, when I run the above command with -n, it spits out a load of HTML (the list of repositories) and then quits. Isn't it meant to stick around, listening on port 9000?
Telnetting to port 9000 gives "Connection refused" and this appears to be the problem nginx is having also:
2012/02/15 22:16:20 [error] 13483#0: *13 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: emily, request: "GET /hg/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "localhost:8001"
I'm confident my nginx config is correct, although I can post it here if you need to take a look.
Thanks for any help :)