Nginx reverse proxy stops working after a few requests; the sites themselves are fine - nginx

There's some problem with my nginx. At first, startup is fine and browsing through the proxy is fast enough. But after a while, 5-10 visits later, the proxy becomes slower and slower until it stops working. Even if I stop nginx with "-s stop", double-check whether any nginx.exe is still running, and then restart nginx, it still doesn't work:
nginx.exe is still running.
The port is still in use.
I am running Windows Server 2003 Enterprise SP2 with IIS 6.
This is the error I read from the log:
2010/08/20 21:14:37 [debug] 1688#3548: posted events 00000000
2010/08/20 21:14:37 [debug] 1688#3548: worker cycle
2010/08/20 21:14:37 [debug] 1688#3548: accept mutex lock failed: 0
2010/08/20 21:14:37 [debug] 1688#3548: select timer: 500
2010/08/20 21:14:37 [debug] 1580#5516: select ready 0
2010/08/20 21:14:37 [debug] 1580#5516: timer delta: 500
2010/08/20 21:14:37 [debug] 1580#5516: posted events 00000000
2010/08/20 21:14:37 [debug] 1580#5516: worker cycle
2010/08/20 21:14:37 [debug] 1580#5516: accept mutex locked
2010/08/20 21:14:37 [debug] 1580#5516: select event: fd:176 wr:0
2010/08/20 21:14:37 [debug] 1580#5516: select timer: 500
2010/08/20 21:14:38 [debug] 1688#3548: select ready 0
2010/08/20 21:14:38 [debug] 1688#3548: timer delta: 500
2010/08/20 21:14:38 [debug] 1688#3548: posted events 00000000
2010/08/20 21:14:38 [debug] 1688#3548: worker cycle
2010/08/20 21:14:38 [debug] 1688#3548: accept mutex lock failed: 0
2010/08/20 21:14:38 [debug] 1688#3548: select timer: 500
And this is the config file I wrote:
#user deploy;
worker_processes 2;

error_log /app/nginx/logs/error.log debug;

events {
    worker_connections 64;
}

http {
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;
    tcp_nodelay on;

    gzip on;
    gzip_min_length 1100;
    gzip_buffers 4 8k;
    gzip_types text/plain;

    upstream mongrel {
        server 127.0.0.1:5000;
        server 127.0.0.1:5001;
        server 127.0.0.1:5002;
        #server 127.0.0.1:5003;
        #server 127.0.0.1:5004;
        #server 127.0.0.1:5005;
        #server 127.0.0.1:5006;
    }

    server {
        listen 81;
        server_name site.com;
        root C:/app/sub/public;
        index index.html index.htm;

        # named locations are written with "@", not "#"
        try_files $uri/index.html $uri.html $uri @mongrel;

        location @mongrel {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://mongrel;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
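One knob worth trying, as a sketch under the assumption that the accept-mutex contention visible in the debug log ("accept mutex lock failed") is the culprit, not a confirmed fix: the nginx documentation notes that on Windows only one worker actually does any work, so running two workers just makes them fight over the listen socket.

```nginx
# Sketch: single-worker setup for the Windows build of nginx.
worker_processes 1;     # the Windows port effectively supports one worker

events {
    accept_mutex off;   # with one worker there is no accept lock to contend for
    worker_connections 64;
}
```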

Related

NGINX: SSL: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure after enabling FIPS on the web server

I am using NGINX as a reverse proxy for my K2View application on CentOS 7. It was all working fine until I turned FIPS mode on on the K2View side. Since then I get an error page when I try to access my page. When I check the error log I see the following error:
2022/12/01 10:42:02 [error] 1572#1572: *3 SSL_do_handshake() failed (SSL: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure) while SSL handshaking to upstream, client: <ip>, server: <ip>, request: "GET /favicon.ico HTTP/2.0", upstream: "https://<ip>:9443/favicon.ico", host: "<hostname>", referrer: "<URL>"
This is my nginx.conf file:
# Settings for a TLS enabled server.
#
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name <ip>;
    root /usr/share/nginx/html;

    ssl_certificate "<path>/certificate.pem";
    ssl_certificate_key "<path>/key.pem";
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 10m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1.2 TLSv1.3;
    client_max_body_size 5m;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        #proxy_read_timeout 1800;
        #proxy_connect_timeout 1800;
        #proxy_send_timeout 1800;
        #send_timeout 1800;
        # Passing host/port to avoid upstream rewriting it with its own port on redirect
        proxy_set_header Host $http_host;
        proxy_pass https://<my ip>:9443;
        client_max_body_size 20m;
        #proxy_ssl_trusted_certificate "<path>/certificate.pem";
        #proxy_ssl_verify on;
        #proxy_ssl_verify_depth 2;
        proxy_ssl_server_name on;
    }

    error_page 404 /404.html;
    location = /404.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
}
I have been reading articles, and they talk about NGINX Plus and FIPS. Can't we use open-source NGINX with FIPS, or do I have to switch to NGINX Plus?
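The "sslv3 alert handshake failure ... while SSL handshaking to upstream" usually means the upstream rejected every protocol/cipher combination nginx offered, which fits a backend that FIPS mode just restricted. A minimal sketch of the proxied location, assuming the K2View backend still accepts TLS 1.2 with a FIPS-approved cipher (the exact protocol and cipher list must be checked against the K2View FIPS settings; this is not a confirmed fix):

```nginx
location / {
    proxy_pass https://<my ip>:9443;

    # Offer only what the FIPS-restricted upstream is likely to accept:
    proxy_ssl_protocols TLSv1.2;
    proxy_ssl_ciphers HIGH:!aNULL:!MD5;

    # Send SNI so the upstream can pick the right certificate:
    proxy_ssl_server_name on;
}
```

Open-source nginx does not need NGINX Plus for this; the `proxy_ssl_*` directives above are part of the standard `ngx_http_proxy_module`.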

grpc_send_timeout doesn't work, Nginx closes GRPC streams unexpectedly

Hi everyone!
I have a TLS NGINX server config that proxies (bidirectional/unidirectional) streams to my golang gRPC server. I use these params in the NGINX conf (server context):
grpc_read_timeout 7d;
grpc_send_timeout 7d;
But my bidirectional streams close after 60s (the server sends data frequently; the client sends nothing within 60s), as if grpc_send_timeout were set to its default value (60s).
But if I send echo requests from the client every 20s, it works fine!
I have no idea why grpc_send_timeout doesn't work!
nginx.conf:
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log debug;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    resolver 127.0.0.1 valid=10s;
    resolver_timeout 10s;

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    include /etc/nginx/conf.d/*.conf;
}
conf.d/my.service.conf:
server {
    listen 443 ssl http2;

    ssl_certificate my-cert.crt;
    ssl_certificate_key my-key.key;

    access_log "/var/log/nginx/my.service.access.log" main;
    error_log "/var/log/nginx/my.service.error.log" debug;

    grpc_set_header x-real-ip $remote_addr;
    grpc_set_header x-ray-id $request_id;

    grpc_read_timeout 7d;
    grpc_send_timeout 7d;  # why does this not work?????

    location /MyGoPackage.MyService {
        grpc_pass grpc://my.service.host:4321;
    }
}
nginx logs:
/ # cat /var/log/nginx/my_host_access.log
59.932 192.168.176.1 - - [06/May/2021:14:57:30 +0000] "POST /MyGoPackege.MyService/MyStreamEndpoint HTTP/2.0" 200 1860 "-" "grpc-go/1.29.1" "-"
client logs (with gRPC debug logging enabled):
2021-05-06T17:56:30.609+0300 DEBUG grpc_mobile_client/main.go:39 open connection {"address": "localhost:443"}
INFO: 2021/05/06 17:56:30 parsed scheme: ""
INFO: 2021/05/06 17:56:30 scheme "" not registered, fallback to default scheme
INFO: 2021/05/06 17:56:30 ccResolverWrapper: sending update to cc: {[{localhost:443 <nil> 0 <nil>}] <nil> <nil>}
INFO: 2021/05/06 17:56:30 ClientConn switching balancer to "pick_first"
INFO: 2021/05/06 17:56:30 Channel switches to new LB policy "pick_first"
INFO: 2021/05/06 17:56:30 Subchannel Connectivity change to CONNECTING
INFO: 2021/05/06 17:56:30 Subchannel picks a new address "localhost:443" to connect
INFO: 2021/05/06 17:56:30 pickfirstBalancer: HandleSubConnStateChange: 0xc0004b2d60, {CONNECTING <nil>}
INFO: 2021/05/06 17:56:30 Channel Connectivity change to CONNECTING
INFO: 2021/05/06 17:56:30 Subchannel Connectivity change to READY
INFO: 2021/05/06 17:56:30 pickfirstBalancer: HandleSubConnStateChange: 0xc0004b2d60, {READY <nil>}
INFO: 2021/05/06 17:56:30 Channel Connectivity change to READY
2021-05-06T17:56:30.628+0300 DEBUG main.go:54 open stream {"address": localhost:443"}
2021-05-06T17:56:30.974+0300 INFO main.go:81 new msg from server {"msg": "hello world"}
... (more log lines within the 60s window) ...
2021-05-06T17:57:30.567+0300 FATAL main.go:79 receive new msg from stream {"error": "rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: PROTOCOL_ERROR"}
server logs (only this line appears at the moment the connection closes, gRPC debug log):
INFO: 2021/05/06 17:57:30 transport: loopyWriter.run returning. connection error: desc = "transport is closing"
Adding these params to the nginx conf solved the problem:
client_header_timeout 7d;
client_body_timeout 7d;
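Putting the fix together: because gRPC runs over HTTP/2, the HTTP-level client timeouts still apply to the stream, so the `grpc_*` timeouts alone are not enough. A consolidated sketch of the server block with all four timeouts raised (same names and values as above):

```nginx
server {
    listen 443 ssl http2;
    ssl_certificate my-cert.crt;
    ssl_certificate_key my-key.key;

    # Upstream-side timeouts for long-lived gRPC streams:
    grpc_read_timeout 7d;
    grpc_send_timeout 7d;

    # Client-side timeouts; without these nginx still closes the
    # connection after the 60s defaults, which is what produced the
    # RST_STREAM / PROTOCOL_ERROR seen on the client:
    client_header_timeout 7d;
    client_body_timeout 7d;

    location /MyGoPackage.MyService {
        grpc_pass grpc://my.service.host:4321;
    }
}
```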

Nginx OSM tiles caching proxy with https upstream

I have an old nginx-based OSM tile caching proxy configured per https://coderwall.com/p/--wgba/nginx-reverse-proxy-cache-for-openstreetmap, but since the source tile server migrated to HTTPS this solution no longer works: 421 Misdirected Request.
I based a fix on the article https://kimsereyblog.blogspot.com/2018/07/nginx-502-bad-gateway-after-ssl-setup.html. Unfortunately, after days of experiments I'm still getting a 502 error.
My theory is that the root cause is the upstream server's SSL certificate, which uses a wildcard: *.tile.openstreetmap.org. But all attempts to use $http_host, $host, proxy_ssl_name, and proxy_ssl_session_reuse in different combinations didn't help: 421 or 502 every time.
My current nginx config is:
worker_processes auto;

events {
    worker_connections 768;
}

http {
    access_log /etc/nginx/logs/access_log.log;
    error_log /etc/nginx/logs/error_log.log;

    client_max_body_size 20m;

    proxy_cache_path /etc/nginx/cache levels=1:2 keys_zone=openstreetmap-backend-cache:8m max_size=500000m inactive=1000d;
    proxy_temp_path /etc/nginx/cache/tmp;

    proxy_ssl_trusted_certificate /etc/nginx/ca.crt;
    proxy_ssl_verify on;
    proxy_ssl_verify_depth 2;
    proxy_ssl_session_reuse on;
    proxy_ssl_name *.tile.openstreetmap.org;

    sendfile on;

    upstream openstreetmap_backend {
        server a.tile.openstreetmap.org:443;
        server b.tile.openstreetmap.org:443;
        server c.tile.openstreetmap.org:443;
    }

    server {
        listen 80;
        listen [::]:80;
        server_name example.com www.example.com;

        include /etc/nginx/mime.types;
        root /dist/browser/;

        location ~ ^/osm-tiles/(.+) {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X_FORWARDED_PROTO http;
            proxy_set_header Host $http_host;
            proxy_cache openstreetmap-backend-cache;
            proxy_cache_valid 200 302 365d;
            proxy_cache_valid 404 1m;
            proxy_redirect off;
            if (!-f $request_filename) {
                proxy_pass https://openstreetmap_backend/$1;
                break;
            }
        }
    }
}
But it still produces an error when accessing https://example.com/osm-tiles/12/2392/1188.png:
2021/02/28 15:05:47 [error] 23#23: *1 upstream SSL certificate does not match "*.tile.openstreetmap.org" while SSL handshaking to upstream, client: 172.28.0.1, server: example.com, request: "GET /osm-tiles/12/2392/1188.png HTTP/1.0", upstream: "https://151.101.2.217:443/12/2392/1188.png", host: "localhost:3003"
The host OS is Ubuntu 20.04 (HTTPS is handled there); nginx is running in Docker from the nginx:latest image, and ca.crt is Ubuntu's default CA bundle.
Please help.
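One thing worth checking, as a sketch rather than a verified fix: `proxy_ssl_name` sets the name that nginx both sends as SNI and verifies the upstream certificate against, and the upstream never presents a literal `*.tile.openstreetmap.org` as its host name; a wildcard certificate matches a concrete name, not the wildcard string itself. Sending a concrete tile host instead may let verification succeed (here `a.tile.openstreetmap.org` is picked arbitrarily from the upstream list):

```nginx
# Inside the http block (or the osm-tiles location):
proxy_ssl_server_name on;                        # send SNI to the upstream
proxy_ssl_name a.tile.openstreetmap.org;         # a concrete name the wildcard cert covers

# And in the location: the upstream expects its own Host header,
# not the proxy's $http_host (which is "localhost:3003" in the error log):
proxy_set_header Host a.tile.openstreetmap.org;
```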

How to solve 502 Bad Gateway error while deploying app in Heroku

I just wanted to test a simple one-page HTML web app on the Heroku cloud. Heroku and Nginx are new to me.
I used the Nginx buildpack, and once the app is deployed I get a 502 Bad Gateway error.
I checked the logs and found the error below.
2019-11-04T15:29:27.937072+00:00 app[web.1]: 2019/11/04 15:29:27 [crit] 20#0: *14 connect() to unix:/tmp/nginx.socket failed (2: No such file or directory) while connecting to upstream, client: 10.5.179.3, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://unix:/tmp/nginx.socket:/favicon.ico", host: "someting.herokuapp.com", referrer: "https://someting.herokuapp.com/testpage_text.html"
/tmp exists on the server. Below is the default configuration.
daemon off;
# Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;

events {
    use epoll;
    accept_mutex on;
    worker_connections <%= ENV['NGINX_WORKER_CONNECTIONS'] || 1024 %>;
}

http {
    gzip on;
    gzip_comp_level 2;
    gzip_min_length 512;

    server_tokens off;

    log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
    access_log <%= ENV['NGINX_ACCESS_LOG_PATH'] || 'logs/nginx/access.log' %> l2met;
    error_log <%= ENV['NGINX_ERROR_LOG_PATH'] || 'logs/nginx/error.log' %>;

    include mime.types;
    default_type application/octet-stream;
    sendfile on;

    # Must read the body in 5 seconds.
    client_body_timeout 5;

    upstream app_server {
        server unix:/tmp/nginx.socket fail_timeout=0;
    }

    server {
        listen <%= ENV["PORT"] %>;
        server_name _;
        keepalive_timeout 5;

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://app_server;
        }
    }
}
Do I need to change or add anything in the "upstream app_server" block of the configuration to make the site work?
Thanks in advance for your help!
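The upstream block itself looks right; the "No such file or directory" error means nothing ever created `/tmp/nginx.socket`. With the common heroku-buildpack-nginx setup, nginx is only a front: your web process is expected to start an application server listening on that Unix socket and then touch `/tmp/app-initialized` so nginx boots, and a static one-page site has no such process. A sketch of the Procfile shape that buildpack expects (the app command here is hypothetical; the exact contract depends on the buildpack version, so check its README):

```
# Procfile — bin/start-nginx is provided by the buildpack; it waits for
# /tmp/app-initialized before starting nginx, then proxies to the socket.
web: bin/start-nginx <your-app-server listening on unix:/tmp/nginx.socket>
```

For a purely static page, an alternative is a buildpack mode that serves files without an app server, rather than pointing the upstream at a socket nobody opens.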

Nginx return 501 when uploading large file

When I upload a 10M file to the server, Nginx returns a 501 error, but smaller files upload fine.
<html>
<head><title>501 Not Implemented</title></head>
<body bgcolor="white">
<center><h1>501 Not Implemented</h1></center>
<hr><center>nginx/1.8.1</center>
</body>
</html>
access.log:
[01/Mar/2017:10:13:29 +0800] "POST /boss/cgi/importemoji HTTP/1.1" 501 582
The Nginx config file is:
http {
    include mime.types;
    #default_type application/octet-stream;
    default_type text/plain;

    access_log logs/access.log main;
    #access_log off;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 30;
    keepalive_requests 100;

    gzip on;
    #gzip_disable msie6;

    proxy_max_temp_file_size 0;
    proxy_buffer_size 20M;
    proxy_buffers 4 20M;

    #mail_spam add url, host will be mod
    #server_name_in_redirect off;

    proxy_connect_timeout 60;
    proxy_read_timeout 120;
    proxy_send_timeout 120;

    client_header_buffer_size 20M;
    client_max_body_size 80M;
    client_body_buffer_size 60M;
    client_body_temp_path /usr/local/qspace/nginx/client_body_temp;
    client_header_timeout 1m;
    client_body_timeout 1m;

    server_names_hash_max_size 1024;
    server_names_hash_bucket_size 1024;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Real-Port $remote_port;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header PROXY_FORWARDED_FOR "disabled";

    server {
        listen 80;
        listen 443 ssl;
        keepalive_timeout 70;
        ...
    }
}
error.log:
2017/03/01 10:41:10 [debug] 6167#6167: *132144 __mydebug. menshen cleanup r: 00000000011DD720
2017/03/01 10:41:11 [debug] 6171#6171: *132871 __mydebug_menshen. ngx_http_dummy_payload_handler wait_for_body: yes
2017/03/01 10:41:11 [debug] 6171#6171: *132871 status: unkown. uri: /boss/cgi/importemoji args: r: 00000000011DD720 r->main: 00000000011DD720 r->count: 1
2017/03/01 10:41:11 [debug] 6171#6171: *132871 __mydebug_menshen. ngx_http_menshen_handler is called r: 00000000011DD720 nginx_version: 1008001
2017/03/01 10:41:11 [debug] 6171#6171: *132871 status: send. uri: /boss/cgi/importemoji. args: r: 00000000011DD720 r->main: 00000000011DD720 r->count: 1
2017/03/01 10:41:11 [debug] 6171#6171: *132871 __mydebug_menshen. len: 3078 header:
POST /boss/cgi/importemoji HTTP/1.1
Proxy-Connection:keep-alive
Content-Length:9465320
Pragma:no-cache
Cache-Control:no-cache
Accept:application/json, text/javascript, */*; q=0.01
I tried uploading the 10M file with curl directly to the backend server, and it worked, so the problem probably arises from Nginx.
How can I fix this bug?
I have solved the problem. Generally, check these points when large file uploads fail on Nginx:
1. File size limit: raise client_max_body_size.
2. Keep-alive connection timeout: raise keepalive_timeout.
3. Reverse proxy timeouts: raise proxy_connect_timeout (and the related proxy_read_timeout / proxy_send_timeout).
Finally, make sure your Nginx is a vanilla build. In my case, my company compiles in a module called Menshen that acts as a firewall for Nginx, and it only lets through upload requests smaller than 8M.
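The first three checks can be sketched as a config fragment (the values are illustrative, taken from the config above; tune them to your workload):

```nginx
http {
    client_max_body_size 80M;    # allow request bodies up to the largest expected upload
    keepalive_timeout 30;        # idle timeout for keep-alive connections
    proxy_connect_timeout 60;    # time allowed to establish the upstream connection
    proxy_read_timeout 120;      # max gap between two reads from the upstream
    proxy_send_timeout 120;      # max gap between two writes to the upstream
}
```

The fourth check has no config knob: a third-party module compiled into the binary (like the Menshen firewall module here) can reject requests before any of these directives apply.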
