How to close the client connection when Nginx's upstream server closes the connection? - nginx

(1) I have an HTTP server listening on port 8080, which sleeps 5 seconds while handling a GET request:

import time
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

class myHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        self.wfile.write("Hello World !")
        time.sleep(5)
        print 'after sleep'
        return

try:
    server = HTTPServer(('', 8080), myHandler)
    server.serve_forever()
except KeyboardInterrupt:
    server.socket.close()
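A side note on why the upstream connection closes at all: BaseHTTPServer speaks HTTP/1.0 by default, so it shuts its connection to nginx after every response, regardless of the keepalive settings on the nginx side. If the upstream were instead meant to keep its connection to nginx alive, a minimal, hypothetical tweak to the handler above (not part of the original post) would be:

class myHandler(BaseHTTPRequestHandler):
    # Assumption: switch the server to HTTP/1.1 so it stops closing the
    # connection after each response. A Content-Length header is then
    # required, because closing the socket no longer marks the end of
    # the response body.
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        body = "Hello World !"
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)
        time.sleep(5)
        print 'after sleep'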
(2) I have nginx listening on port 1014 and forwarding requests to 8080:

upstream kubeapiserver {
    server 127.0.0.1:8080;
    keepalive 30;
}

server {
    listen 1014;
    server_name 127.0.0.1;

    location / {
        proxy_pass http://kubeapiserver;
        proxy_http_version 1.1;
        proxy_connect_timeout 1800s;
        proxy_read_timeout 1800s;
        proxy_send_timeout 1800s;
    }
}
(3) Make a long-running request to port 1014:

import time
import requests

url = "http://127.0.0.1:1014"
s = requests.Session()
r = s.get(url)
print r.headers
print r.status_code
print r.content
while True:
    time.sleep(10)
And I found with netstat -nap | egrep "(8080)|(1014)" that when connection B (nginx to the upstream on port 8080) is closed, connection A (the client to nginx on port 1014) is STILL ESTABLISHED.
How do I close the client connection when Nginx's upstream server closes its connection? How should I configure this?
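One detail that may explain the observation: nginx manages the keepalive of the client connection independently of the upstream connection, holding it open for keepalive_timeout (75 seconds by default) regardless of what the upstream does. A minimal sketch, assuming the goal is simply to have nginx close the client connection as soon as each response has been sent (it does not literally tie the client-side close to the upstream close):

server {
    listen 1014;
    server_name 127.0.0.1;

    # Assumption: drop client keepalive entirely, so nginx closes the
    # client connection right after the response instead of holding it
    # open for the default 75s.
    keepalive_timeout 0;

    location / {
        proxy_pass http://kubeapiserver;
        proxy_http_version 1.1;
    }
}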

Related

NGINX: no live upstreams while connecting to upstream

I am running multiple Filebeat instances on a server, listening on various ports. A stream of UDP packets arrives on the server on port 2055. These packets are routed to the upstream Filebeat servers in a round-robin manner. When I listen directly with a single Filebeat on port 2055, Filebeat can process around 20k packets/second without nginx. However, when I route these packets through nginx, the error below is encountered:
udp client: 10.224.3.178, server: 0.0.0.0:2055, upstream: "stream_backend", bytes from/to client:192/0, bytes from/to upstream:0/0
Following is my Nginx stream block configuration:

stream {
    log_format proxy '$remote_addr [$time_local] '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time "$upstream_addr" '
                     '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';

    access_log /var/log/nginx/stream.log proxy;

    upstream stream_backend {
        # least_conn;
        # upstream_connect_time 10;
        # random two least_time=connect;
        zone backend 100k;
        server 127.0.0.1:2056;
        server 127.0.0.1:2057;
        server 127.0.0.1:2058;
        server 127.0.0.1:2059;
        server 127.0.0.1:2060;
        server 127.0.0.1:2061;
        server 127.0.0.1:2062;
        server 127.0.0.1:2063;
        server 127.0.0.1:2064;
        server 127.0.0.1:2065;
    }

    server {
        listen 2055 udp;
        proxy_pass stream_backend;
        proxy_bind $remote_addr transparent;
        proxy_buffer_size 10000k;
        # upstream_connect_time 10;
        proxy_timeout 10s;
        # proxy_connect_timeout 75s;
        proxy_responses 1;
        # health_check udp;
    }
}
Nginx has a bunch of timeout directives, and I don't know if I'm missing something important. Any help would be highly appreciated.
I guess proxy_pass is not being picked up correctly by Nginx.
proxy_pass should be under a location block.
Try adding it under
location ~ {}
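Separately, and echoing the fix in the question further down this page: the configuration above waits for one reply datagram per client datagram (proxy_responses 1). If the Filebeat listeners never send anything back over UDP (an assumption about this setup, not stated in the post), a sketch along these lines may be worth trying:

server {
    listen 2055 udp;
    proxy_pass stream_backend;
    proxy_timeout 10s;
    # Assumption: the backends are receive-only, so do not wait for a
    # reply datagram before considering the session complete.
    proxy_responses 0;
}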

Getting NGINX "failed (111: Connection refused) while proxying and reading from upstream, udp"

I have an existing reverse proxy for HTTP/HTTPS and I wanted to add raw TCP/UDP streams.
Using the ngx_stream_core_module module, I added this config:
stream {
    upstream app-udp {
        server 192.168.165.10:50000 max_fails=0;
    }

    upstream app-tcp {
        server 192.168.165.10:50001 max_fails=0;
    }

    server {
        listen 192.168.134.20:6653 udp;
        preread_buffer_size 16k;
        preread_timeout 30s;
        proxy_protocol_timeout 30s;
        proxy_timeout 10m;
        proxy_pass "app-udp";
    }

    server {
        listen 192.168.134.20:6654;
        preread_buffer_size 16k;
        preread_timeout 30s;
        proxy_protocol_timeout 30s;
        proxy_timeout 10m;
        proxy_pass "app-tcp";
    }
}
On the backend I set up an echo service just to bounce the packets back as a reply.
Testing TCP works great, with packets sent and responses received all the way through.
UDP, however, fails on the response from the echo service back to NGINX.
Wireshark shows:
"ICMP: icmp destination unreachable port unreachable"
The nginx error log shows:
"*352 recv() failed (111: Connection refused) while proxying and
reading from upstream, udp client: 192.168.165.10, server:
192.168.134.20:6653, upstream: "192.168.165.10:50000", bytes from/to client:35/0, bytes from/to upstream:0/35"
I couldn't find any reference online to this type of error from NGINX...
What could be the problem? Have I missed some configuration in NGINX, or is it perhaps an issue with the network on the backend side? Thank you.
I had exactly the same problem. Then I added proxy_responses 0; (which tells nginx not to expect any reply datagrams from the upstream) to the configuration and the error disappeared:
stream {
    upstream app-udp {
        server 192.168.165.10:50000 max_fails=0;
    }

    upstream app-tcp {
        server 192.168.165.10:50001 max_fails=0;
    }

    server {
        listen 192.168.134.20:6653 udp;
        preread_buffer_size 16k;
        preread_timeout 30s;
        proxy_protocol_timeout 30s;
        proxy_timeout 10m;
        proxy_pass "app-udp";
        proxy_responses 0;
    }

    server {
        listen 192.168.134.20:6654;
        preread_buffer_size 16k;
        preread_timeout 30s;
        proxy_protocol_timeout 30s;
        proxy_timeout 10m;
        proxy_pass "app-tcp";
        proxy_responses 0;
    }
}
Hope this helps

Nginx proxy_pass through 2 servers and custom headers getting "lost"

I am having a problem with my Nginx configuration.
I have an Nginx server (A) that adds custom headers and then proxy_passes to another server (B), which then proxy_passes to my Flask app (C) that reads the headers. If I go from A -> C, the Flask app can read the headers that are set, but if I go through B (A -> B -> C) the headers seem to be removed.
Config:

events {
    worker_connections 512;
}

http {
    # Server B
    server {
        listen 127.0.0.1:5001;
        server_name 127.0.0.1;

        location / {
            proxy_pass http://127.0.0.1:5000;
        }
    }

    # Server A
    server {
        listen 4999;
        server_name domain.com;

        location / {
            proxy_pass http://127.0.0.1:5001;
            proxy_set_header X-Forwarded-User 'username';
        }
    }
}
Flask app running on 127.0.0.1:5000
If I change the server A config to proxy_pass http://127.0.0.1:5000, then the Flask app can see X-Forwarded-User, but if I go through server B the headers are "lost".
I am not sure what I am doing wrong. Any suggestions?
Thanks
I cannot reproduce the issue. Sending the custom header X-custom-header: custom, in my netcat server I get:
nc -l -vvv -p 5000
Listening on [0.0.0.0] (family 0, port 5000)
Connection from localhost 41368 received!
GET / HTTP/1.0
Host: 127.0.0.1:5000
Connection: close
X-Forwarded-User: username
User-Agent: curl/7.58.0
Accept: */*
X-custom-header: custom
(see? the X-custom-header is on the last line)
when I run this curl command:
curl -H "X-custom-header: custom" http://127.0.0.1:4999/
against an nginx server running this exact config:
events {
    worker_connections 512;
}

http {
    # Server B
    server {
        listen 127.0.0.1:5001;
        server_name 127.0.0.1;

        location / {
            proxy_pass http://127.0.0.1:5000;
        }
    }

    # Server A
    server {
        listen 4999;
        server_name domain.com;

        location / {
            proxy_pass http://127.0.0.1:5001;
            proxy_set_header X-Forwarded-User 'username';
        }
    }
}
Thus I can only assume that the problem is in the part of your config that you aren't showing us. (You said it yourself: it's not the real config you're showing us, but a replica; specifically, a replica that isn't showing the problem.)
Thus I have voted to close this question as "cannot reproduce" - at least I can't.
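For completeness, a minimal sketch of the Flask side (C) that dumps the headers it actually receives can help pinpoint where they disappear; the route and names here are illustrative assumptions, not taken from the original post:

from flask import Flask, request

app = Flask(__name__)

@app.route('/')
def dump_headers():
    # Echo back every header the app received, making it easy to see
    # whether X-Forwarded-User survived both proxy hops.
    lines = ['%s: %s' % (name, value) for name, value in request.headers]
    return '\n'.join(lines), 200, {'Content-Type': 'text/plain'}

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=5000)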

Problem with nginx + socket + flask. 504 after handshake

I have spent many days trying to set up nginx + socketio + flask. After fixing many different problems, I hit one I can't even find on Google (maybe I'm just too dumb, but still :) ).
After starting all services (uWSGI + Nginx), my app becomes available and everything looks OK. Socket.IO makes the handshake and gets a 200 response. Still OK. After that, the long-polling (XHR) requests start getting 504 errors. In the nginx error log I see that a ping was sent but no pong was received... and after that every request starts to get a 504...
Please help, I have no more ideas about where I'm wrong...
My settings:
/etc/nginx/sites-available/myproject

server {
    listen 80;
    server_name mydomen.ru;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/myproject/ftp/files/myproject.sock;
    }

    location /socket.io/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        include uwsgi_params;
        uwsgi_pass unix:/home/myproject/ftp/files/myproject.sock;
    }
}
/etc/systemd/system/myproject.service

[Unit]
Description=myproject description
After=network.target

[Service]
User=myuser
Group=www-data
WorkingDirectory=/home/myproject/ftp/files
Environment="PATH=/home/myproject/ftp/files/venv/bin"
ExecStart=/home/myproject/ftp/files/venv/bin/uwsgi --ini /home/myproject/ftp/files/uwsgi.ini

[Install]
WantedBy=multi-user.target
/home/myproject/ftp/files/uwsgi.ini
[uwsgi]
module = my_module:application
master = true
gevent = 500
buffer-size = 32768
http-websockets = true
socket = myproject.sock
chmod-socket = 660
vacuum = true
die-on-term = true
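No answer was posted here, but one detail stands out: proxy_set_header only affects requests forwarded with proxy_pass, so in the /socket.io/ block above the Upgrade/Connection headers have no effect on uwsgi_pass. A minimal sketch of the usual WebSocket proxy pattern, assuming (this is an assumption, not from the post) that the uWSGI instance were also exposed over HTTP on 127.0.0.1:5000:

location /socket.io/ {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_buffering off;
    # Assumption: uWSGI listens on an HTTP socket at this address; the
    # real address depends on how uwsgi.ini is configured.
    proxy_pass http://127.0.0.1:5000;
}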

nginx close upstream connection after request

I need to keep the connection between nginx and the upstream Node.js servers alive.
I just compiled and installed nginx 1.2.0.
My configuration file:
upstream backend {
    ip_hash;
    server dev:3001;
    server dev:3002;
    server dev:3003;
    server dev:3004;
    keepalive 128;
}

server {
    listen 9000;
    server_name dev;

    location / {
        proxy_pass http://backend;
        error_page 404 = 404.png;
    }
}
My programs (dev:3001 - 3004) detect that the connection was closed by nginx after the response.
The documentation states that for HTTP keepalive to the upstream, you should also set proxy_http_version 1.1; and proxy_set_header Connection "";
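Applying that advice to the configuration above gives roughly the following sketch (untested against the original setup). The keepalive directive in the upstream block only takes effect when nginx talks HTTP/1.1 to the backend and the Connection header is cleared:

upstream backend {
    ip_hash;
    server dev:3001;
    server dev:3002;
    server dev:3003;
    server dev:3004;
    keepalive 128;
}

server {
    listen 9000;
    server_name dev;

    location / {
        proxy_pass http://backend;
        # Needed for upstream keepalive: use HTTP/1.1 towards the backend
        # and clear the Connection header so "close" is not forwarded.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        error_page 404 = 404.png;
    }
}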
