I'm getting this error in /var/log/messages on my FreeBSD box. I'm using nginx and spawn-fcgi with the memcache and APC modules enabled.
upstream prematurely closed connection while reading response header from upstream, client: …, request: "… HTTP/1.1", upstream: "fastcgi://unix:/tmp/fcgi.sock:", host: …
I've had a similar error with unicorn + nginx.
The end result was that unicorn was timing out due to a firewall misconfiguration, dying off and leaving NGINX clueless as to what to do (nginx would then throw a 503).
Once the port was open my issue was resolved.
I've also seen this happen with an API call that takes a long time (longer than my 30s unicorn timeout). I ended up shipping it off to a background job so that unicorn didn't time out.
I had a similar issue with Nginx timing out with a RoR app when using an EC2 + Amazon RDS database instance.
The issue was resolved by editing my security group for the RDS instance to allow the EC2's IP over port 5432. Just edit the security group's rules to add a custom rule for the port you are communicating to the RDS instance over, and whitelist the EC2 server's private IP address. Worked instantly after that!
In my case it was related to the PHP version: I was running the latest nginx against a slightly older PHP, and updating PHP to the latest version fixed the issue.
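If the FastCGI backend is slow rather than crashing, the nginx-side read timeout for that hop is also worth checking. A minimal sketch; the socket path is taken from the error above, everything else is assumed:

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # Socket path from the error message above
    fastcgi_pass unix:/tmp/fcgi.sock;
    # Default is 60s; raise it if scripts legitimately run longer
    fastcgi_read_timeout 120s;
}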
We are facing a 502 error at our IIS web server, which worked fine until recently.
We had to change our reverse proxy machine: we went from Ubuntu 18.04 to Ubuntu 22.04, and in the process the NGINX version changed from 1.21.3 to 1.18.0 (the version packaged with 22.04).
Right now, with NGINX 1.18.0, the 502 occurs. In the NGINX log we see:
peer closed connection in SSL handshake (104: Unknown error) while SSL handshaking to upstream
The client-side connection is served with TLS v1.3, as reported by Mozilla Firefox 101.0.1 (64-bit).
We're working with a Windows Server 2012, which only accepts TLS up to v1.2 (https://learn.microsoft.com/en-us/windows/win32/secauthn/protocols-in-tls-ssl--schannel-ssp-).
At the upstream log there is nothing. If we go back to the older NGINX, the connection goes through without problems (TLS v1.2). So the problem seems to be in NGINX. We suspect TLS v1.3 (as per the link provided), but couldn't manage to solve this.
We tried setting ssl_protocols TLSv1 TLSv1.1; in nginx.conf, but it didn't work: we still get TLS v1.3. We also tried updating NGINX (apt update, no version change) and some settings (proxy_ssl_name, proxy_ssl_server_name and proxy_pass), with no success either.
Any ideas on how to solve this?
We're working with a Windows Server 2012, which only accepts TLS up to v1.2
Yup, you identified the problem right there. Windows Server 2012 is built on the Win8 codebase, and Win8 won't support TLS 1.3, yet you have a business need to communicate over TLS 1.3. Sounds like you want to swap some portion of your deployed software stack.
You could revert to your old Bionic Beaver, but you're likely better off sticking with Jammy. Time has moved on, and you want your stack components to keep up with a changing Internet.
We eventually understood that the reason for the 502 error was that NGINX didn't accept a private, locally generated certificate while proxying the connection (which is what our IIS had).
We closed the case by removing TLS from the internal segment (between NGINX and IIS) while keeping TLS v1.3 on the user/client-to-NGINX segment.
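For anyone hitting the same wall: ssl_protocols only governs the client-facing listener, while the hop to IIS is controlled by proxy_ssl_protocols (plus proxy_ssl_verify / proxy_ssl_trusted_certificate when the upstream presents a privately signed certificate), which is why editing ssl_protocols changed nothing upstream. Below is a minimal sketch of the setup described above, with TLS terminated at NGINX and plain HTTP internally; every name, path and address is an assumption, not taken from the post:

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    # Clients may still negotiate TLS v1.3 on this segment
    ssl_protocols TLSv1.2 TLSv1.3;

    location / {
        # Plain HTTP on the internal segment to IIS: no upstream TLS
        # handshake, so the private-certificate problem disappears
        proxy_pass http://10.0.0.5:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}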
When my app invokes other services over HTTP, DNS resolution happens on every request. At the beginning, DNS resolution was normal and there were no timeouts, but after a while more and more lookups started timing out. If I take the affected domain names and resolve them with the 'dig' command in the Linux environment, they all resolve normally. In my nginx.conf I have resolver_timeout 60s (default 30s) and resolver 8.8.8.8. My app is deployed with OpenResty. How do I debug this?
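One hedged suggestion (the directives are standard nginx/OpenResty; the values are assumptions, not a confirmed fix): configure more than one resolver, cache the answers, and fail fast rather than letting each lookup block for a full minute:

# Two resolvers, so one slow or unreachable server doesn't stall every lookup;
# valid=300s caches answers for five minutes regardless of the record TTL
resolver 8.8.8.8 1.1.1.1 valid=300s;
# Fail a lookup after 10s instead of the 60s currently configured
resolver_timeout 10s;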
I'm seeing a weird situation where either Nginx or uwsgi seems to be building up a long queue of incoming requests, and attempting to process them long after the client connection timed out. I'd like to understand and stop that behavior. Here's more info:
My Setup
My server uses Nginx to pass HTTPS POST requests to uWSGI and Flask via a Unix file socket. I have basically the default configurations on everything.
I have a Python client sending 3 requests per second to that server.
The Problem
After running the client for about 4 hours, the client machine started reporting that all the connections were timing out. (It uses the Python requests library with a 7-second timeout.) About 10 minutes later, the behavior changed: the connections began failing with 502 Bad Gateway.
I powered off the client. But for about 10 minutes AFTER powering off the client, the server-side uWSGI logs showed uWSGI attempting to answer requests from that client! And top showed uWSGI using 100% CPU (25% per worker).
During those 10 minutes, each uwsgi.log entry looked like this:
Thu May 25 07:36:37 2017 - SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request /api/polldata (ip 98.210.18.212) !!!
Thu May 25 07:36:37 2017 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 296] during POST /api/polldata (98.210.18.212)
IOError: write error
[pid: 34|app: 0|req: 645/12472] 98.210.18.212 () {42 vars in 588 bytes} [Thu May 25 07:36:08 2017] POST /api/polldata => generated 0 bytes in 28345 msecs (HTTP/1.1 200) 2 headers in 0 bytes (0 switches on core 0)
And the Nginx error.log shows a lot of this:
2017/05/25 08:10:29 [error] 36#36: *35037 connect() to unix:/srv/my_server/myproject.sock failed (11: Resource temporarily unavailable) while connecting to upstream, client: 98.210.18.212, server: example.com, request: "POST /api/polldata HTTP/1.1", upstream: "uwsgi://unix:/srv/my_server/myproject.sock:", host: "example.com:5000"
After about 10 minutes the uWSGI activity stops. When I turn the client back on, Nginx happily accepts the POST requests, but uWSGI gives the same "writing to a closed pipe" error on every request, as if it's permanently broken somehow. Restarting the web server's Docker container does not fix the problem, but rebooting the host machine does.
Theories
In the default Nginx -> socket -> uWSGI configuration, is there a long queue of requests with no timeout? I looked in the uWSGI docs and I saw a bunch of configurable timeouts, but all default to around 60 seconds, so I can't understand how I'm seeing 10-minute-old requests being handled. I haven't changed any default timeout settings.
The application uses almost all the 1GB RAM in my small dev server, so I think resource limits may be triggering the behavior.
Either way, I'd like to change my configuration so that requests > 30 seconds old get dropped with a 500 error, rather than getting processed by uWSGI. I'd appreciate any advice on how to do that, and theories on what's happening.
This appears to be an issue on the uWSGI side.
It sounds like your backend code may be faulty: it takes too long to process requests, does not implement any sort of rate limiting, and does not properly detect when the underlying connection has been terminated. Hence the errors about your code writing to closed pipes, and possibly it even starts processing new requests long after the underlying connections have been terminated.
As per http://lists.unbit.it/pipermail/uwsgi/2013-February/005362.html, you might want to abort processing within your backend if not uwsgi.is_connected(uwsgi.connection_fd()).
You might want to explore https://uwsgi-docs.readthedocs.io/en/latest/Options.html#harakiri.
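For instance, a minimal uwsgi.ini sketch (the module name, worker count and values are assumptions, not taken from the question). harakiri recycles any worker stuck on one request for more than 30 seconds, and shrinking the listen backlog sheds load instead of silently queueing stale requests; the default backlog of 100 on the socket is one plausible source of the 10-minute-old requests you observed:

[uwsgi]
# Hypothetical Flask entry point
module = myapp:app
master = true
processes = 4
socket = /srv/my_server/myproject.sock
# Recycle any worker that spends more than 30s on a single request
harakiri = 30
# Socket listen backlog (default 100); smaller means fail fast, not queue
listen = 64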
As a last resort, as per Re: Understanding "proxy_ignore_client_abort" functionality (2014), you might want to change uwsgi_ignore_client_abort from off to on. That stops nginx from dropping ongoing requests that have already been passed to uWSGI when the client disconnects, which removes the closed-pipe errors from uWSGI, and it lets nginx enforce any concurrent-connection limits itself. Otherwise, the connections to uWSGI get dropped by nginx as soon as the client disconnects, and nginx has no clue how many requests are still queued up within uWSGI for subsequent processing.
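A hedged sketch of the nginx side (the socket path mirrors the question; the location and timeout value are assumptions):

location /api/ {
    include uwsgi_params;
    uwsgi_pass unix:/srv/my_server/myproject.sock;
    # Keep the uWSGI side of an in-flight request alive if the client
    # disconnects, so workers stop dying with SIGPIPE mid-response
    uwsgi_ignore_client_abort on;
    # Give up on the upstream after 30s (default 60s) and return an error
    uwsgi_read_timeout 30s;
}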
This could also be a DoS attack: uWSGI pegged at 100% CPU while nginx returns 502, 504 and 500 matches that pattern. IP spoofing is common in DoS attacks; rule it out by checking the logs.
How do I proxy requests to NTLM-protected websites, like TeamFoundation and SharePoint? I keep getting 401 authentication errors.
According to this Microsoft TechNet article, you can't.
Microsoft NTLM uses stateful HTTP, which is a violation of the HTTP/1.1 RFC. It requires the authentication handshake (an affair involving a couple of initial 401 responses) and all subsequent requests to travel over the exact same client-to-server connection. This makes HTTP proxying nearly impossible, since each request would usually go out over either a new connection or a random one picked from a pool of open connections. It can be done, though.
NGINX apparently supports this through the "ntlm" option, but it is part of their commercial offering. Apache HTTPD seems to have a couple of experimental patches for it, but those require rebuilding Apache. TinyProxy doesn't support it either. HAProxy to the rescue!
Here is an example of a running configuration which works - it's a fairly simple setup with a single backend server:
backend backend_tfs
    mode http
    balance roundrobin
    option http-keep-alive
    option prefer-last-server
    timeout server 30s
    timeout connect 4s
    server static teamfoundation.mycompany.com:8080 check maxconn 3

frontend frontend_tfs
    # You probably want something other than 127.0.0.1 here:
    bind 127.0.0.1:8080 name frontend_tfs
    mode http
    option http-keep-alive
    timeout client 30s
    default_backend backend_tfs
The important options here are http-keep-alive and prefer-last-server: together they pin each client to the same upstream connection, which is exactly what NTLM's connection-bound authentication requires.
One more thing from my scenario:
If you are using SSL on both sides (the IIS servers and HAProxy), the SSL configuration must be the same for IIS and the HAProxy server; otherwise NTLM doesn't work when going from HAProxy to IIS.
Maybe this can help someone who has the same problem.
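For reference, a hedged sketch of what the backend from the earlier example might look like when HAProxy also speaks TLS towards IIS; the port, CA path and verify policy are assumptions:

backend backend_tfs
    mode http
    option http-keep-alive
    option prefer-last-server
    # TLS towards IIS; ca-file points at the CA that signed the IIS certificate
    server static teamfoundation.mycompany.com:8443 ssl verify required ca-file /etc/haproxy/iis-ca.pem check maxconn 3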
We are using nginx for https traffic offloading, proxying to a locally installed jasperserver (5.2) running on port 8080.
internet ---(https/443)---> nginx ---(http/8080)---> tomcat/jasperserver
When accessing jasperserver directly on its port, everything is fine. When accessing the service through nginx, some functionality is broken (e.g. editing a user in the jasperserver UI) and the jasperserver log has entries like this:
CSRFGuard: potential cross-site request forgery (CSRF) attack thwarted (user:%user%, ip:%remote_ip%, uri:%request_uri%, error:%exception_message%)
After some debugging we found the cause of this:
In its standard configuration, nginx does not forward request headers whose names contain underscores. Jasperserver (and the OWASP CSRFGuard framework), however, both default to using underscores in the names that transmit the CSRF token (JASPER_CSRF_TOKEN and OWASP_CSRFTOKEN respectively).
Solution is to either:
nginx: allow underscores in headers
server {
    ...
    underscores_in_headers on;
}
jasperserver: change token configuration name in jasperserver-pro/WEB-INF/esapi/Owasp.CsrfGuard.properties
Also see here:
header variables go missing in production
http://wiki.nginx.org/HttpCoreModule#underscores_in_headers
Answered it myself - hopefully this is of some use to others, too.
I had this issue with the Jasperserver 5.5 AWS AMI.
More specifically, the file to edit is:
/var/lib/tomcat7/webapps/jasperserver-pro/WEB-INF/esapi/Owasp.CsrfGuard.properties
Change:
org.owasp.csrfguard.TokenName=JASPER_CSRF_TOKEN
org.owasp.csrfguard.SessionKey=JASPER_CSRF_SESSION_KEY
To:
org.owasp.csrfguard.TokenName=JASPERCSRFTOKEN
org.owasp.csrfguard.SessionKey=JASPERCSRFSESSIONKEY
My version of Jasperserver looked slightly different: the CSRFGuard files are located in jasperserver/WEB-INF/csrf, and I edited the jrs.csrfguard.properties file there.