grpc_send_timeout doesn't work, Nginx closes gRPC streams unexpectedly

Hi everyone!
I have a TLS NGINX server config that proxies streams (bidirectional/unidirectional) to my golang gRPC server. I use these parameters in the NGINX config (server context):
grpc_read_timeout 7d;
grpc_send_timeout 7d;
But my bidirectional streams close after 60s when the server sends data frequently and the client sends nothing within that window, as if grpc_send_timeout were still at its default value (60s).
However, if I send echo requests from the client every 20s, it works fine!
I have no idea why grpc_send_timeout doesn't work!
nginx.conf:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log debug;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    resolver 127.0.0.1 valid=10s;
    resolver_timeout 10s;
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/conf.d/*.conf;
}
conf.d/my.service.conf:
server {
    listen 443 ssl http2;
    ssl_certificate my-cert.crt;
    ssl_certificate_key my-key.key;
    access_log "/var/log/nginx/my.service.access.log" main;
    error_log "/var/log/nginx/my.service.error.log" debug;
    grpc_set_header x-real-ip $remote_addr;
    grpc_set_header x-ray-id $request_id;
    grpc_read_timeout 7d;
    grpc_send_timeout 7d; # why does this not work?????

    location /MyGoPackage.MyService {
        grpc_pass grpc://my.service.host:4321;
    }
}
nginx logs:
/ # cat /var/log/nginx/my_host_access.log
59.932 192.168.176.1 - - [06/May/2021:14:57:30 +0000] "POST /MyGoPackege.MyService/MyStreamEndpoint HTTP/2.0" 200 1860 "-" "grpc-go/1.29.1" "-"
client logs (with gRPC debug logging):
2021-05-06T17:56:30.609+0300 DEBUG grpc_mobile_client/main.go:39 open connection {"address": "localhost:443"}
INFO: 2021/05/06 17:56:30 parsed scheme: ""
INFO: 2021/05/06 17:56:30 scheme "" not registered, fallback to default scheme
INFO: 2021/05/06 17:56:30 ccResolverWrapper: sending update to cc: {[{localhost:443 <nil> 0 <nil>}] <nil> <nil>}
INFO: 2021/05/06 17:56:30 ClientConn switching balancer to "pick_first"
INFO: 2021/05/06 17:56:30 Channel switches to new LB policy "pick_first"
INFO: 2021/05/06 17:56:30 Subchannel Connectivity change to CONNECTING
INFO: 2021/05/06 17:56:30 Subchannel picks a new address "localhost:443" to connect
INFO: 2021/05/06 17:56:30 pickfirstBalancer: HandleSubConnStateChange: 0xc0004b2d60, {CONNECTING <nil>}
INFO: 2021/05/06 17:56:30 Channel Connectivity change to CONNECTING
INFO: 2021/05/06 17:56:30 Subchannel Connectivity change to READY
INFO: 2021/05/06 17:56:30 pickfirstBalancer: HandleSubConnStateChange: 0xc0004b2d60, {READY <nil>}
INFO: 2021/05/06 17:56:30 Channel Connectivity change to READY
2021-05-06T17:56:30.628+0300 DEBUG main.go:54 open stream {"address": localhost:443"}
2021-05-06T17:56:30.974+0300 INFO main.go:81 new msg from server {"msg": "hello world"}
// some more logs during the next 60s
2021-05-06T17:57:30.567+0300 FATAL main.go:79 receive new msg from stream {"error": "rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: PROTOCOL_ERROR"}
server logs (only this line appears at the moment the connection closes, with gRPC debug logging enabled):
INFO: 2021/05/06 17:57:30 transport: loopyWriter.run returning. connection error: desc = "transport is closing"

client_header_timeout 7d;
client_body_timeout 7d;
Adding these parameters to the NGINX config solved the problem. It seems that on a long-lived stream where the client sends nothing, the client-side timeouts kick in: client_body_timeout limits the gap between successive reads of the client request body, while grpc_send_timeout only covers transmitting to the upstream gRPC server.
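For reference, the working server block would look roughly like this (a sketch combining the settings above; certificate paths and the service name are the placeholders from the question):

```nginx
server {
    listen 443 ssl http2;
    ssl_certificate     my-cert.crt;
    ssl_certificate_key my-key.key;

    # upstream side: how long nginx waits on the backend connection
    grpc_read_timeout 7d;
    grpc_send_timeout 7d;

    # client side: how long nginx waits for the client to send
    # headers / body frames on a long-lived stream
    client_header_timeout 7d;
    client_body_timeout   7d;

    location /MyGoPackage.MyService {
        grpc_pass grpc://my.service.host:4321;
    }
}
```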

Related

nginx reverse to ssl backend

I received a crt file from a partner and I want NGINX to make the SSL connection, so I followed this guide: https://docs.nginx.com/nginx/admin-guide/security-controls/securing-http-traffic-upstream/ . I don't have a client pem file or a client key file. How can I generate these files from the crt to fill in these directives?
location /upstream {
    proxy_pass https://backend.example.com;
    proxy_ssl_certificate /etc/nginx/client.pem;
    proxy_ssl_certificate_key /etc/nginx/client.key;
}
My current location block:
location /api {
    access_log /var/log/nginx/api.log upstream_logging;
    proxy_ssl_trusted_certificate /etc/nginx/partner.crt;
    # proxy_ssl_certificate_key /etc/nginx/client.key;
    # proxy_ssl_certificate /etc/nginx/client.pem;
    proxy_ssl_verify off;
    proxy_ssl_verify_depth 2;
    proxy_ssl_session_reuse on;
    proxy_ssl_server_name on;
    #proxy_ssl_protocols TLSv1;
    proxy_pass https://api$uri$is_args$args;
}
With this setting I get this error:
SSL_do_handshake() failed (SSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol) while SSL handshaking to upstream
How do I get client.key? Is it generated from the crt file?
You do not need a client key.
The error SSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol usually means the upstream did not answer with TLS at all (for example, it spoke plain HTTP on that port), or that the protocol was not chosen correctly in proxy_ssl_protocols.
Official documentation: nginx.org, proxy_ssl_protocols
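A minimal sketch of the location, assuming the backend does speak TLS and only the protocol selection is off (the upstream name and paths are the placeholders from the question):

```nginx
location /api {
    # Explicitly list the TLS versions the upstream supports.
    proxy_ssl_protocols TLSv1.2 TLSv1.3;
    proxy_ssl_server_name on;   # send SNI to the upstream
    proxy_ssl_trusted_certificate /etc/nginx/partner.crt;
    proxy_ssl_verify on;        # verify upstream against the partner cert
    proxy_pass https://api$uri$is_args$args;
}
```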

Problems with socket.io over https, using nginx as proxy

I have a very weird problem with socket.io and I was hoping someone can help me out.
For some reason a few clients cannot connect to the server no matter what, when I am using https.
I am getting the following error code: ERR_CRYPTO_OPERATION_FAILED (see the detailed log below).
Again, most of the time the connection is perfectly fine; only some (random) clients seem to have this problem.
I have created a super simple server.js and client.js to make it easy to test.
I am using socket.io#2.4.1, and socket.io-client#2.4.0
Unfortunately version 3.x.x is not an option.
The OS is Ubuntu 18.04, both on the server, and the client side.
Nginx:
server {
    listen 80;
    server_name example.domain.com;
    return 301 https://example.domain.com$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.domain.com;
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/cert.key;
    ssl_protocols TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://127.0.0.1:8000;
        include /etc/nginx/proxy_params;
    }

    location /socket.io {
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 30s;
        proxy_read_timeout 30s;
        proxy_send_timeout 30s;
        proxy_pass http://127.0.0.1:8000/socket.io;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
        internal;
    }
}
client.js:
const io = require("socket.io-client"); // socket.io-client#2.4.0
const client = io.connect("https://example.domain.com", {
    origins: '*:*',
    transportOptions: {
        polling: {
            extraHeaders: {
                'Authorization': token // token is defined elsewhere
            }
        }
    },
});
I tried adding secure: true, reconnect: true, and rejectUnauthorized: false, but it made no difference.
I also tested it with and without the transportOptions.
server.js:
const express = require("express");
const socket = require("socket.io"); // socket.io#2.4.1

const port = 5000;
const app = express();
const server = app.listen(port, () => {
    console.log(`Listening on port: ${port}`);
});
const io = socket(server);

io.on("connection", (socket) => {
    console.log("Client connected", socket.id);
});
Of course, when I remove the redirect in nginx and use plain old http to connect, then everything is fine.
When I run DEBUG=* node client.js, I get the following:
socket.io-client:url parse https://example.domain.com/ +0ms
socket.io-client new io instance for https://example.domain.com/ +0ms
socket.io-client:manager readyState closed +0ms
socket.io-client:manager opening https://example.domain.com/ +1ms
engine.io-client:socket creating transport "polling" +0ms
engine.io-client:polling polling +0ms
engine.io-client:polling-xhr xhr poll +0ms
engine.io-client:polling-xhr xhr open GET: https://example.domain.com/socket.io/?EIO=3&transport=polling&t=NVowV1t&b64=1 +2ms
engine.io-client:polling-xhr xhr data null +2ms
engine.io-client:socket setting transport polling +61ms
socket.io-client:manager connect attempt will timeout after 20000 +66ms
socket.io-client:manager readyState opening +3ms
engine.io-client:socket socket error {"type":"TransportError","description":{"code":"ERR_CRYPTO_OPERATION_FAILED"}} +12ms
socket.io-client:manager connect_error +9ms
socket.io-client:manager cleanup +1ms
socket.io-client:manager will wait 1459ms before reconnect attempt +3ms
engine.io-client:socket socket close with reason: "transport error" +6ms
engine.io-client:polling transport not open - deferring close +74ms
socket.io-client:manager attempting reconnect +1s
...
Searching for the ERR_CRYPTO_OPERATION_FAILED error only leads me to the Node.js errors page, which has just the following description:
Added in: v15.0.0
A crypto operation failed for an otherwise unspecified reason.
I am using a Let's Encrypt certificate.
I don't get it. If it is an SSL issue, why am I getting this error for only a few clients?
Maybe I am missing something in nginx?
Any help is much appreciated.
I've seen a similar error with node-apn. My solution was to downgrade to Node.js v14. Maybe give that a try?
Two steps:
1. The Node.js version must be 14.x.
2. Add rejectUnauthorized: false to the options when connecting.

How to close the client connection when Nginx's upstream server closes its connection?

(1) I have an HTTP server listening on port 8080 which sleeps 5 seconds while handling a GET request:
import time
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

class myHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        self.wfile.write("Hello World !")
        time.sleep(5)
        print 'after sleep'
        return

try:
    server = HTTPServer(('', 8080), myHandler)
    server.serve_forever()
except KeyboardInterrupt:
    server.socket.close()
(2) I have nginx listening on 1014 and forwarding requests to 8080:
upstream kubeapiserver {
    server 127.0.0.1:8080;
    keepalive 30;
}

server {
    listen 1014;
    server_name 127.0.0.1;

    location / {
        proxy_pass http://kubeapiserver;
        proxy_http_version 1.1;
        proxy_connect_timeout 1800s;
        proxy_read_timeout 1800s;
        proxy_send_timeout 1800s;
    }
}
(3) Make a long-lived request to 1014:
import time
import requests

url = "http://127.0.0.1:1014"
s = requests.Session()
r = s.get(url)
print r.headers
print r.status_code
print r.content
while True:
    time.sleep(10)
And I found with netstat -nap | egrep "(8080)|(1014)" that when connection B (nginx to upstream, port 8080) closes, connection A (client to nginx, port 1014) is STILL ESTABLISHED.
How do I close the client connection when nginx's upstream server closes its connection? How should I configure this?
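One possible direction (a sketch, not a verified fix for this exact setup): connection A stays open because nginx maintains HTTP keep-alive with the client regardless of what happens on the upstream side. Disabling client keep-alive for this server makes nginx close the client connection once each response is complete:

```nginx
server {
    listen 1014;
    server_name 127.0.0.1;

    # 0 disables client-side HTTP keep-alive: nginx sends
    # "Connection: close" and closes the connection after the response.
    keepalive_timeout 0;

    location / {
        proxy_pass http://kubeapiserver;
        proxy_http_version 1.1;
    }
}
```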

How do I configure MQTT in nginx

I need to configure MQTT in nginx. I have tried the following: I added a .conf file at /usr/local/etc/nginx/mqtt.conf, and mqtt.conf contains:
log_format mqtt '$remote_addr [$time_local] $protocol $status $bytes_received '
                '$bytes_sent $upstream_addr';

upstream hive_mq {
    server 127.0.0.1:18831; # node1
    server 127.0.0.1:18832; # node2
    server 127.0.0.1:18833; # node3
    zone tcp_mem 64k;
}

match mqtt_conn {
    # Send CONNECT packet with client ID "nginx health check"
    send \x10\x20\x00\x06\x4d\x51\x49\x73\x64\x70\x03\x02\x00\x3c\x00\x12\x6e\x67\x69\x6e\x78\x20\x68\x65\x61\x6c\x74\x68\x20\x63\x68\x65\x63\x6b;
    expect \x20\x02\x00\x00; # Entire payload of CONNACK packet
}

server {
    listen 1883;
    server_name localhost;
    proxy_pass hive_mq;
    proxy_connect_timeout 1s;
    health_check match=mqtt_conn;
    access_log /var/log/nginx/mqtt_access.log mqtt;
    error_log /var/log/nginx/in.log; # Health check notifications
    #access_log /var/log/nginx/access.log;
    #error_log /var/log/nginx/error.log; # Health check notifications
}
I have also added this to nginx.conf:
worker_processes 1;

events {
    worker_connections 1024;
}

stream {
    include stream_conf.d/*.conf;
}
Then I tried to publish a message with mosquitto:
mosquitto_pub -d -h localhost -p 1883 -t "topic/test" -m "test123" -i "thing001"
But I am not able to connect to nginx. Can anyone help me configure this?
Thanks in advance :)
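One thing worth checking: the match and health_check directives in the stream module are NGINX Plus (commercial) features, so the config above will fail to load on open-source nginx; also make sure mqtt.conf actually lives where the include stream_conf.d/*.conf pattern looks for it. A minimal open-source sketch that just load-balances MQTT (ports taken from the question, no active health checks) would be:

```nginx
stream {
    upstream hive_mq {
        server 127.0.0.1:18831;
        server 127.0.0.1:18832;
        server 127.0.0.1:18833;
    }

    server {
        listen 1883;
        proxy_pass hive_mq;
        proxy_connect_timeout 1s;
    }
}
```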

Reverse Proxy from nginx won't run. Sites are normal

There's some problem with my nginx. At first, starting is OK and surfing through the proxy is fast enough. But after a while, 5-10 visits later, the proxy becomes slower and slower until it stops working. Even if I stop nginx using "-s stop", double-check that no nginx.exe is running, and restart nginx, it's still not working.
nginx.exe is still running.
The port is still in use.
I am running on Windows Server 2003 Enterprise SP2 with IIS6.
This is the error I read from the log:
2010/08/20 21:14:37 [debug] 1688#3548: posted events 00000000
2010/08/20 21:14:37 [debug] 1688#3548: worker cycle
2010/08/20 21:14:37 [debug] 1688#3548: accept mutex lock failed: 0
2010/08/20 21:14:37 [debug] 1688#3548: select timer: 500
2010/08/20 21:14:37 [debug] 1580#5516: select ready 0
2010/08/20 21:14:37 [debug] 1580#5516: timer delta: 500
2010/08/20 21:14:37 [debug] 1580#5516: posted events 00000000
2010/08/20 21:14:37 [debug] 1580#5516: worker cycle
2010/08/20 21:14:37 [debug] 1580#5516: accept mutex locked
2010/08/20 21:14:37 [debug] 1580#5516: select event: fd:176 wr:0
2010/08/20 21:14:37 [debug] 1580#5516: select timer: 500
2010/08/20 21:14:38 [debug] 1688#3548: select ready 0
2010/08/20 21:14:38 [debug] 1688#3548: timer delta: 500
2010/08/20 21:14:38 [debug] 1688#3548: posted events 00000000
2010/08/20 21:14:38 [debug] 1688#3548: worker cycle
2010/08/20 21:14:38 [debug] 1688#3548: accept mutex lock failed: 0
2010/08/20 21:14:38 [debug] 1688#3548: select timer: 500
And this is the config file I wrote:
#user deploy;
worker_processes 2;
error_log /app/nginx/logs/error.log debug;

events {
    worker_connections 64;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    tcp_nodelay on;
    gzip on;
    gzip_min_length 1100;
    gzip_buffers 4 8k;
    gzip_types text/plain;

    upstream mongrel {
        server 127.0.0.1:5000;
        server 127.0.0.1:5001;
        server 127.0.0.1:5002;
        #server 127.0.0.1:5003;
        #server 127.0.0.1:5004;
        #server 127.0.0.1:5005;
        #server 127.0.0.1:5006;
    }

    server {
        listen 81;
        server_name site.com;
        root C:/app/sub/public;
        index index.html index.htm;
        try_files $uri/index.html $uri.html $uri @mongrel;

        location @mongrel {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://mongrel;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
