Increased request count ends up with 'Error: socket hang up' (http)

The backend service behind an nginx proxy starts responding with 'Error: socket hang up' when the number of requests increases. The setup is as follows.
OS: CentOS 6
Express.js service -> nginx as a proxy -> Flask app run by Gunicorn
The Express.js app sends multiple requests at the same time to the other service; when the request count exceeds ~100, error responses start coming back. With a lower count everything works fine.
I have followed the example nginx configuration from the Gunicorn documentation, increased the timeout limits, and increased the nginx open-files limit. I have also tried the keepalive option, but the issue remains. Gunicorn doesn't show any errors.
nginx configuration fragment:
upstream app_server {
    server 127.0.0.1:8000 fail_timeout=0;
    keepalive 100;
}
server {
    listen 5001;
    client_max_body_size 4G;
    keepalive_timeout 300;

    root /path/to/app/current/public; # static files

    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        # Timeouts
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
        send_timeout 300;

        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://app_server;
    }
}
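One detail about the keepalive attempt: according to the nginx documentation, connections to the upstream are only kept alive when the Connection header is cleared in the location that proxies to it (proxy_http_version 1.1 is already set above). A minimal sketch of that addition:
location @proxy_to_app {
    proxy_http_version 1.1;
    # clear the Connection header so nginx can reuse upstream connections
    proxy_set_header Connection "";
    proxy_pass http://app_server;
}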
Error response received from proxy:
{ RequestError: Error: socket hang up
at new RequestError (/home/pm2deploy/apps/app-backend/source/node_modules/request-promise-core/lib/errors.js:14:15)
at Request.plumbing.callback (/home/pm2deploy/apps/app-backend/source/node_modules/request-promise-core/lib/plumbing.js:87:29)
at Request.RP$callback [as _callback] (/home/pm2deploy/apps/app-backend/source/node_modules/request-promise-core/lib/plumbing.js:46:31)
at self.callback (/home/pm2deploy/apps/app-backend/source/node_modules/request/request.js:185:22)
at Request.emit (events.js:160:13)
at Request.onRequestError (/home/pm2deploy/apps/app-backend/source/node_modules/request/request.js:881:8)
at ClientRequest.emit (events.js:160:13)
at Socket.socketOnEnd (_http_client.js:423:9)
at Socket.emit (events.js:165:20)
at endReadableNT (_stream_readable.js:1101:12)
at process._tickCallback (internal/process/next_tick.js:152:19)
name: 'RequestError',
message: 'Error: socket hang up',
cause: { Error: socket hang up
at createHangUpError (_http_client.js:330:15)
at Socket.socketOnEnd (_http_client.js:423:23)
at Socket.emit (events.js:165:20)
at endReadableNT (_stream_readable.js:1101:12)
at process._tickCallback (internal/process/next_tick.js:152:19) code: 'ECONNRESET' },
error: { Error: socket hang up
at createHangUpError (_http_client.js:330:15)
at Socket.socketOnEnd (_http_client.js:423:23)
at Socket.emit (events.js:165:20)
at endReadableNT (_stream_readable.js:1101:12)
at process._tickCallback (internal/process/next_tick.js:152:19) code: 'ECONNRESET' },
options:
{ method: 'PUT',
uri: 'http://localhost:5001/transformers/segmentAvg',
qs:
{ stdMultiplier: 2,
segmentLeft: 1509366682333,
segmentRight: 1509367401685 },
body: { index: [Array], values: [Array] },
headers: {},
json: true,
callback: [Function: RP$callback],
transform: undefined,
simple: true,
resolveWithFullResponse: false,
transform2xxOnly: false },
response: undefined }
ADDED:
The following entry was recorded in the OS log:
possible SYN flooding on port X. Sending cookies.

The kernel socket backlog reached its limit and subsequent requests were dropped.
Reason: the kernel drops TCP connections when the LISTEN socket's buffer is full; see the Red Hat Enterprise Linux solution linked below.
Increase the kernel socket backlog limit
Check the current value:
# sysctl net.core.somaxconn
net.core.somaxconn = 128
Increase the value:
# sysctl -w net.core.somaxconn=2048
net.core.somaxconn = 2048
Confirm the change by viewing again:
# sysctl net.core.somaxconn
net.core.somaxconn = 2048
Persist the change:
echo "net.core.somaxconn = 2048" >> /etc/sysctl.conf
Increase application socket listen backlog
Configuration parameter for uWSGI:
listen=1024
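The backend in this particular setup is Gunicorn rather than uWSGI; to the best of my knowledge the equivalent knob there is the backlog setting (default 2048), and nginx's own listen directive accepts a backlog parameter as well. A sketch, with the bind address taken from the upstream block above and a placeholder module:callable:
# Gunicorn: raise the socket listen backlog (setting name: backlog)
gunicorn --backlog 2048 --bind 127.0.0.1:8000 app:app

# nginx: the listen directive can also take an explicit backlog
# listen 5001 backlog=2048;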
This solution was taken from https://access.redhat.com/solutions/30453

Related

Nginx causes file upload to freeze

I've been trying to figure this out for days now.
When I attempt to upload a file to my web server (written in Java), about 2.5 MB of the file uploads and then it just freezes. Nginx appears to be the culprit, because when I upload the file directly to port 1234 using my VPS's IP address instead of the domain, the full file uploads perfectly fine.
I am using a program, also written in Java, to upload the file to the web server, and I am getting this error from it:
Exception in thread "main" java.io.IOException: Premature EOF
at sun.net.www.http.ChunkedInputStream.readAheadBlocking(ChunkedInputStream.java:565)
at sun.net.www.http.ChunkedInputStream.readAhead(ChunkedInputStream.java:609)
at sun.net.www.http.ChunkedInputStream.read(ChunkedInputStream.java:696)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3456)
at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3449)
at java.nio.file.Files.copy(Files.java:2908)
at java.nio.file.Files.copy(Files.java:3027)
at me.hellin.Main.uploadFile(Main.java:28)
at me.hellin.Main.main(Main.java:23)
This is my nginx config for it:
server {
    listen 80;
    server_name *redacted*;

    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;

    location / {
        client_max_body_size 100M;
        proxy_pass http://localhost:1234;
    }
}
server {
    client_max_body_size 100M;
    client_body_timeout 120s;
    client_body_temp_path /tmp;
}
This is what I see in nginx error.log:
2022/05/03 14:14:41 [error] 2085134#2085134: *326930 connect() to [::1]:1234 failed (101: Network is unreachable) while connecting to upstream, client: *redacted*, server: *redacted*, request: "POST / HTTP/1.1", upstream: "http://[::1]:1234/", host: "*redacted*"
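As a side note on that error.log entry: nginx resolved localhost to [::1] (IPv6) first, and that connect attempt failed with "Network is unreachable". Pointing proxy_pass at the IPv4 loopback avoids those attempts (a minimal sketch; it may or may not be related to the freeze itself):
location / {
    client_max_body_size 100M;
    # use the IPv4 loopback so nginx does not try [::1]:1234 first
    proxy_pass http://127.0.0.1:1234;
}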
Here's my code just in case I did something wrong here that somehow only affects nginx:
private static InputStream upload(File file) throws Exception {
    HttpURLConnection httpURLConnection = (HttpURLConnection) new URL("*redacted*")
            .openConnection();
    httpURLConnection.setDoOutput(true);
    httpURLConnection.setRequestProperty("content-length", String.valueOf(file.length()));
    httpURLConnection.setRequestProperty("content-type", "application/java-archive");
    httpURLConnection.setRequestMethod("POST");
    OutputStream outputStream = httpURLConnection.getOutputStream();
    Files.copy(file.toPath(), outputStream);
    outputStream.close();
    return httpURLConnection.getInputStream();
}
I have finally found the solution to this infuriating issue. It turns out I had to change the server's code (the side receiving the file) so that it sends its response only after the upload's output stream has been closed. I was sending a response back to the client before that, and I guess nginx saw the early response and closed the connection.

nginx / uwsgi-flask changes port after form submission

I set up a Flask/uWSGI container and an nginx container.
First of all, I could not access the site with nginx listening on port 80 and the Docker port mapping 80:80. No idea why this would not work.
So I made nginx listen on port 8090 and mapped 80:8090 in Docker:
nginx:
  container_name: fa_nginx
  build:
    context: ./nginx
    dockerfile: Dockerfile
  restart: unless-stopped
  ports:
    - 80:8090
  networks:
    - fa_nginx_net
I am not exposing any ports in the Dockerfile; there I just copy the conf file.
## nginx conf
server {
    listen 8090;
    location / {
        include /etc/nginx/uwsgi_params;
        uwsgi_pass web:8087;
    }
}
Like that, I can access the site at http://localhost and browse around, e.g. to http://localhost/faq.
However, when I submit a form (login), nginx switches to another port and the redirected URL looks like http://localhost:8090/auth/login.
The redirect in Flask's login view after successful form validation is simply return redirect(url_for('main.profile')).
Here is the Flask/uWSGI setup (a shortened version without all the env settings etc.):
web:
  container_name: fa_web
  build:
    context: ./web
    dockerfile: Dockerfile
  expose:
    - "8087"
  networks:
    - fa_nginx_net
[uwsgi]
plugin = python3
## python file where flask object app is defined.
wsgi-file = run.py
## The flask instance defined in run.py
callable = fapp
enable-threads = false
master = true
processes = 2
threads = 2
## socket on which uwsgi server should listen
protocol=http
socket = :8087
buffer-size=32768
## Gives permission to access the server.
chmod-socket = 660
vacuum = true
## various
die-on-term = true
## run.py
from app import create_app, db
import os

## init flask app
fapp = create_app()

if __name__ == "__main__":
    if os.environ.get('FLASK_ENV') == 'development':
        fapp.run(use_reloader=True, debug=True, host='0.0.0.0', port=8086)
    else:
        fapp.run(host='0.0.0.0', port=8087)
I have no idea why this is happening or how to solve it.
OK, it is working now...
Basically I removed plugin=python3 and protocol=http from uwsgi.ini.
My setup now looks like this:
## nginx default.conf
server {
    listen 8090;
    #server_name fa_web;
    ## To map the server block to an ip:
    #server_name 1.2.3.4;
    ## To map the server block to a domain:
    #server_name example.com www.example.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        include /etc/nginx/uwsgi_params;
        uwsgi_pass web:8087;
    }
}
## uwsgi.ini
[uwsgi]
## python file where flask object app is defined.
wsgi-file = run.py
## The flask instance defined in myapp.py
callable = fapp
enable-threads = false
master = true
processes = 2
threads = 2
## socket on which uwsgi server should listen
socket = :8087
buffer-size=32768
#post-buffering=8192 ## workaround for consuming POST requests
## Gives permission to access the server.
chmod-socket = 660
vacuum = true
## various
die-on-term = true
The headers added in the nginx conf didn't make a difference; I just added them while getting everything configured correctly going forward.
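Presumably the mismatch was the problem: uwsgi_pass in nginx speaks uWSGI's binary protocol, whereas protocol=http told uWSGI to expect plain HTTP on the same socket, so the host/port information Flask uses to build redirect URLs did not come through cleanly. If plain HTTP is actually wanted, the usual pairing (an assumption, not something from the original post) would be the http option in uWSGI together with proxy_pass in nginx:
## uwsgi.ini - serve plain HTTP instead of the binary uwsgi protocol
[uwsgi]
http = :8087

## nginx - pair an HTTP backend with proxy_pass rather than uwsgi_pass
location / {
    proxy_set_header Host $http_host;
    proxy_pass http://web:8087;
}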

ERROR org.keycloak.adapters.OAuthRequestAuthenticator - failed to turn code into token

We have a problem with our application, nginx, and Keycloak. There are three instances: instance 1 is the APP, instance 2 is nginx (reverse proxy), and instance 3 is Keycloak.
When a user logs in, the session is created in Keycloak, but when the user is sent back to the SiAe application, it responds with 403.
In the Keycloak administration console we can see that the session was created successfully and is open, but the return to the application still does not work.
Logs
Nginx:
"GET /opensat/?state=dcc1c40f-3183-4c7b-8342-f7df620cf0b3&session_state=605ba79a-ee05-4918-a96d-71466e31210a&code=fe066a17-a97a-495c-94f9-b1e5e3d6ac1f.605ba79a-ee05-4918-a96d-71466e31210a.f91920a4-3267-4de5-9788-24093a32c217 HTTP/1.1" **403 405** "https://mydomain/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:71.0) Gecko/20100101 Firefox/71.0"
APP:
ERROR org.keycloak.adapters.OAuthRequestAuthenticator - failed to turn code into token
java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:196)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:442)
at sun.security.ssl.InputRecord.read(InputRecord.java:480)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:934)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1332)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1359)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1343)
at org.apache.http.conn.ssl.SSLSocketFactory.createLayeredSocket(SSLSocketFactory.java:573)
at org.keycloak.adapters.SniSSLSocketFactory.createLayeredSocket(SniSSLSocketFactory.java:114)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:557)
at org.keycloak.adapters.SniSSLSocketFactory.connectSocket(SniSSLSocketFactory.java:109)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:414)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
Keycloak:
INFO [org.keycloak.storage.ldap.LDAPIdentityStoreRegistry] (default task-1) Creating new LDAP Store for the LDAP storage provider: 'ldap_pre', LDAP Configuration: {pagination=[true], fullSyncPeriod=[-1], usersDn=[ou=usuarios,dc=domain,dc=es], connectionPooling=[true], cachePolicy=[DEFAULT], useKerberosForPasswordAuthentication=[false], importEnabled=[true], enabled=[true], changedSyncPeriod=[86400], bindDn=[cn=admin,dc=domain,dc=es], usernameLDAPAttribute=[uid], lastSync=[1575269470], vendor=[other], uuidLDAPAttribute=[entryUUID], connectionUrl=[ldap://MIIP:389], allowKerberosAuthentication=[false], syncRegistrations=[false], authType=[simple], debug=[false], searchScope=[1], useTruststoreSpi=[ldapsOnly], priority=[0], userObjectClasses=[inetOrgPerson, organizationalPerson, person], rdnLDAPAttribute=[cn], editMode=[WRITABLE], validatePasswordPolicy=[false], batchSizeForSync=[1000]}, binaryAttributes: []
Configurations:
Nginx:
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://IP_KEYCLOAK:8081;
}
Keycloak:
....
<server name="default-server">
<http-listener name="default" socket-binding="http" redirect-socket="proxyhttps" proxy-address-forwarding="true" enable-http2="true"/>
......
<socket-binding name="proxy-https" port="443"/>
.....
APP .json:
{
"realm": "domain",
"auth-server-url": "https://domainkeycloak/",
"ssl-required": "none",
"resource": "sso",
"enable-cors" : true,
"credentials": {
"secret": "98236f9a-c8b1-488c-8b36-ace4f95b1aa6"
},
"confidential-port": 0,
"disable-trust-manager":true,
"allow-any-hostname" : true
}
Could anybody help us, please?
The error Connection reset means that the application cannot make a request to the Keycloak server. You should investigate the issue further by logging in to the application server and telnetting to the Keycloak domain on port 443, since you set auth-server-url to https://domainkeycloak/.
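A quick way to check this from the application host (a sketch; the /auth prefix and the realm name "domain" are assumptions based on a WildFly-based Keycloak and the posted keycloak.json):
# basic TCP/TLS reachability of the Keycloak host
telnet domainkeycloak 443
openssl s_client -connect domainkeycloak:443 -servername domainkeycloak </dev/null

# fetch the realm's discovery document over the same URL the adapter uses
curl -vk https://domainkeycloak/auth/realms/domain/.well-known/openid-configuration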
The solution was to put Keycloak behind a separate nginx instead of using one nginx for both the APP and Keycloak. Now we have two nginx instances and Keycloak works fine with our APP.

Identity Server 4 endpoint ports not filled in

Spec:
.NET Core 2
Identity Server 4
On my local dev machine, when I visit http://127.0.0.1:5000/.well-known/openid-configuration with Postman, I can find, for example, "jwks_uri" with the address http://127.0.0.1:5000/.well-known/openid-configuration/jwks.
I can visit http://127.0.0.1:5000/.well-known/openid-configuration/jwks and see a result like:
{
"keys": [
{
"kty": "RSA",
"use": "sig",
"kid": "6ca39c3dd4ffda97d502243e25fa4e54",
"e": "AQAB",
"n": "sZthlS0HE1pkbSnMlPyKNDkAqkQryeKG7YSRMeUbrDQARu-9f11iUFUblAdXUhuFRu0R77AQ-mhjy7kfjQMOT58gp3aMa17HTKcMxZRZEi-zcXZuxVA7Q0nuWrWp4_-0VAMV4OhGromZCFtUb26kRJXyKMNlHSM2irSJ9LWnx6NtSkHMrC_kv3kpciZWLx__9DkVM7wmYuGz9DMezoz7-FuwcJcGJHmVz7RNRwGNhdcvEG8nJE3fl8QQ16CjOim2X845gaIc9dWKi1MAA_LS1M2EK4aU8FZjVqgQgY472zrwGtUtwz25aUEZu130fthZabvOiWTDbztuYtOmrxP7BQ",
"alg": "RS256"
}
]
}
Port 5000 is the most important thing here.
[dev machine screenshot]
From my local dev machine, when I visit http://192.168.168.13:81/.well-known/openid-configuration with Postman, I can find, for example, "jwks_uri" with the address http://192.168.168.13/.well-known/openid-configuration/jwks.
I cannot visit http://192.168.168.13/.well-known/openid-configuration/jwks because I receive a 404 error:
nginx error!
The page you are looking for is not found.
There is no port 81 in that URL.
I can visit http://192.168.168.13:81/.well-known/openid-configuration/jwks and see a result like:
{
"keys": [
{
"kty": "RSA",
"use": "sig",
"kid": "6ca39c3dd4ffda97d502243e25fa4e54",
"e": "AQAB",
"n": "sZthlS0HE1pkbSnMlPyKNDkAqkQryeKG7YSRMeUbrDQARu-9f11iUFUblAdXUhuFRu0R77AQ-mhjy7kfjQMOT58gp3aMa17HTKcMxZRZEi-zcXZuxVA7Q0nuWrWp4_-0VAMV4OhGromZCFtUb26kRJXyKMNlHSM2irSJ9LWnx6NtSkHMrC_kv3kpciZWLx__9DkVM7wmYuGz9DMezoz7-FuwcJcGJHmVz7RNRwGNhdcvEG8nJE3fl8QQ16CjOim2X845gaIc9dWKi1MAA_LS1M2EK4aU8FZjVqgQgY472zrwGtUtwz25aUEZu130fthZabvOiWTDbztuYtOmrxP7BQ",
"alg": "RS256"
}
]
}
[server machine screenshot]
These are my CentOS firewall settings:
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources:
services: ssh dhcpv6-client
ports: 80/tcp 443/tcp 81/tcp 82/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
This is my nginx configuration for reverse proxy:
server {
    listen 81;
    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
As far as I have investigated, the problem is that on the server machine the URLs in .well-known/openid-configuration do not contain the proper port (in this case 81).
The endpoints exist, because when I manually add port 81 to an endpoint URL, the endpoint is available.
Because my application relies on .well-known/openid-configuration to autodiscover endpoints, my authentication doesn't work. I don't know whether the problem is in the IdentityServer 4 configuration or in the CentOS configuration.
If anybody is facing the same issue, the following link will be helpful :)
http://amilspage.com/set-identityserver4-url-behind-loadbalancer/
Especially this part:
app.UseMiddleware<PublicFacingUrlMiddleware>();
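The likely reason the port is missing: IdentityServer builds the discovery-document URLs from the host information forwarded by the proxy, and nginx's $host variable drops the port, while $http_host keeps it. So an alternative (or complement) to the middleware is to forward the full Host header from nginx; a sketch of that change to the reverse-proxy block above, which is an assumption on my part rather than something taken from the linked article:
server {
    listen 81;
    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        # $http_host includes the ":81" that $host drops, so the discovery
        # document can advertise the externally visible port
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}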

nginx upstream is not recognized by proxy_pass

I want nginx to pass different URIs to different backends, so I tried this:
server {
    listen 8090;
    access_log /var/log/nginx/nginx_access.log combined;
    error_log /var/log/nginx/nginx_error.log debug;

    location /bar {
        proxy_pass http://backend2;
    }
    location /foo {
        proxy_pass http://backend2;
    }
    location / {
        proxy_pass http://backend1;
    }
}
upstream backend1 {
    server 10.33.12.41:8080;
    server 127.0.0.1:8080 max_fails=3;
}
upstream backend2 {
    server 10.33.12.41:8080;
    server 10.33.12.43:8080;
}
If I call wget http://mynginxserver:8090/ I get the following:
wget http://mynginxserver:8090/
--2015-09-18 11:58:21-- http://mynginxserver:8090/
Connecting to mynginxserver:8090... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://backend1/
[following]
--2015-09-18 11:58:21-- http://backend1/
Resolving backend1 (backend1)... failed: Temporary failure in name resolution.
wget: unable to resolve host address ‘backend1’
Why does it try to resolve backend1? I don't get it. Please help ;)
Regards,
Snooops
My fault:
First, it should have been posted on serverfault.com,
and second, it's already solved here:
https://serverfault.com/questions/590044/nginx-proxy-pass-config
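For context on why the client tries to resolve backend1: by default proxy_pass sets the upstream request's Host header to the upstream name, and the backend here answers with a 302 whose Location is built from that header, which the client then cannot resolve. The gist of the linked fix (a sketch, not a verbatim copy of the serverfault answer) is to pass the original Host header and/or rewrite upstream-issued redirects:
location / {
    # keep the client's original Host so redirects built by the backend
    # point at mynginxserver instead of the internal upstream name
    proxy_set_header Host $host;
    proxy_pass http://backend1;
    # alternatively, rewrite Location headers that still name the upstream
    proxy_redirect http://backend1/ /;
}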

Resources