I currently have Nginx running on the same machine as the rest of my servers, none of which use IPv6. Relatively frequently I get hang-ups while loading content during testing, and I find error messages in error.log.
My current config:
http {
include mime.types;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
server_tokens off;
resolver 1.1.1.1 ipv6=off;
#keepalive_timeout 0;
keepalive_timeout 60s;
upstream master_process {
server localhost:40088;
}
upstream http_worker {
hash $remote_addr consistent;
server localhost:40089;
server localhost:40090;
server localhost:40091;
server localhost:40092;
}
#http server
server {
listen 88;
location / {
lingering_close on;
lingering_time 15s;
lingering_timeout 2s;
proxy_pass http://http_worker;
proxy_http_version 1.1;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
}
location ~ ^/(Main|Monitor|Chart|chartfeed|getchartdata()|Live|Log$) {
proxy_pass http://master_process;
proxy_http_version 1.1;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header X-Real-IP $remote_addr;
}
location ~* \.(gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|txt|js|css|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso|woff|ttf|svg|eot|htm)$ {
proxy_pass http://master_process;
gzip_static on;
expires 7d;
}
}
}
The errors I am currently receiving:
2022/01/28 11:42:27 [error] 23732#17404: *1 connect() failed (10061: No connection could be made because the target machine actively refused it) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /Main?_SID=1*479985359 HTTP/1.1", upstream: "http://[::1]:40088/Main?_SID=1*479985359", host: "localhost:88", referrer: "http://localhost:88/login()"
2022/01/28 11:42:52 [error] 23732#17404: *1 connect() failed (10061: No connection could be made because the target machine actively refused it) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /Main?_SID=1*479985359 HTTP/1.1", upstream: "http://[::1]:40088/Main?_SID=1*479985359", host: "localhost:88", referrer: "http://localhost:88/login()"
Note that I have specified a resolver in the http section so that it applies globally. I have also tried moving that resolver into the server and location sections, to no avail.
I have also tried adding a server block with listen 88 default_server; and listen [::]:88 ipv6only=on; as others have suggested after a quick search online, which also didn't solve the issue.
Any help would be greatly appreciated!
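For what it's worth, the error log shows nginx connecting to http://[::1]:40088 — the name localhost in the upstream block is being resolved to the IPv6 loopback, while the backends listen only on IPv4. The resolver directive does not affect upstream blocks; their host names are resolved once at startup using the system resolver. A sketch of the same upstream blocks with explicit IPv4 loopback addresses (same ports as above, offered as a thing to try rather than a guaranteed fix) would be:

```nginx
upstream master_process {
    server 127.0.0.1:40088;  # explicit IPv4 loopback instead of "localhost"
}

upstream http_worker {
    hash $remote_addr consistent;
    server 127.0.0.1:40089;
    server 127.0.0.1:40090;
    server 127.0.0.1:40091;
    server 127.0.0.1:40092;
}
```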
Related
This is how I configure my Nginx:
upstream stage {
server example.com;
}
server {
server_name IP;
listen 80;
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_set_header protocol Token;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_pass https://stage;
}
}
I see this on error.log
2021/11/03 15:26:14 [error] 40782#40782: *1 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: IP, server: IP, request: "POST / HTTP/1.1", upstream: "https://IP:80/", host: "IP:10784"
How can I proxy user's request from http to https?
Disabling upstream certificate verification with the proxy_ssl_verify off directive will make the error go away, although it leaves the upstream TLS connection unauthenticated -- something you should not do on a public network between the proxying party and the upstream.
Here is the changed configuration:
upstream stage {
server example.com:443;
}
server {
server_name IP;
listen 80;
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host example.com;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Port 443;
proxy_set_header X-Forwarded-Proto https;
proxy_ssl_verify off;
proxy_pass https://stage$request_uri;
}
}
I have the old nginx-based OSM tile caching proxy configured per https://coderwall.com/p/--wgba/nginx-reverse-proxy-cache-for-openstreetmap, but since the source tile server migrated to HTTPS this solution no longer works: 421 Misdirected Request.
I based my fix on the article https://kimsereyblog.blogspot.com/2018/07/nginx-502-bad-gateway-after-ssl-setup.html. Unfortunately, after days of experiments I'm still getting a 502 error.
My theory is that the root cause is the upstream servers' SSL certificate, which uses a wildcard: *.tile.openstreetmap.org, but all attempts to use $http_host, $host, proxy_ssl_name, and proxy_ssl_session_reuse in different combinations didn't help: 421 or 502 every time.
My current nginx config is:
worker_processes auto;
events {
worker_connections 768;
}
http {
access_log /etc/nginx/logs/access_log.log;
error_log /etc/nginx/logs/error_log.log;
client_max_body_size 20m;
proxy_cache_path /etc/nginx/cache levels=1:2 keys_zone=openstreetmap-backend-cache:8m max_size=500000m inactive=1000d;
proxy_temp_path /etc/nginx/cache/tmp;
proxy_ssl_trusted_certificate /etc/nginx/ca.crt;
proxy_ssl_verify on;
proxy_ssl_verify_depth 2;
proxy_ssl_session_reuse on;
proxy_ssl_name *.tile.openstreetmap.org;
sendfile on;
upstream openstreetmap_backend {
server a.tile.openstreetmap.org:443;
server b.tile.openstreetmap.org:443;
server c.tile.openstreetmap.org:443;
}
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
include /etc/nginx/mime.types;
root /dist/browser/;
location ~ ^/osm-tiles/(.+) {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X_FORWARDED_PROTO http;
proxy_set_header Host $http_host;
proxy_cache openstreetmap-backend-cache;
proxy_cache_valid 200 302 365d;
proxy_cache_valid 404 1m;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass https://openstreetmap_backend/$1;
break;
}
}
}
}
But it still produces an error when accessing https://example.com/osm-tiles/12/2392/1188.png:
2021/02/28 15:05:47 [error] 23#23: *1 upstream SSL certificate does not match "*.tile.openstreetmap.org" while SSL handshaking to upstream, client: 172.28.0.1, server: example.com, request: "GET /osm-tiles/12/2392/1188.png HTTP/1.0", upstream: "https://151.101.2.217:443/12/2392/1188.png", host: "localhost:3003"
The host OS is Ubuntu 20.04 (where HTTPS is handled); nginx runs in Docker from the nginx:latest image, and ca.crt is Ubuntu's default CA bundle.
Please help.
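One thing worth checking: by default nginx does not send SNI when connecting to an upstream, and the name it verifies against must be a concrete host that the wildcard certificate covers, not the literal wildcard string. A sketch of the relevant directives (the host name here is one of the upstream servers, chosen for illustration) might be:

```nginx
proxy_ssl_server_name on;                  # send SNI in the upstream TLS handshake
proxy_ssl_name a.tile.openstreetmap.org;   # a concrete name the *.tile.openstreetmap.org cert covers
```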
I want to deploy a Vapor app on my server to use it as backend for my iOS app.
I'm pretty new to this topic. The only thing I did before was deploying a Django backend on the same server. I rebuilt my server to set up the Vapor backend.
To begin, I wanted to deploy a Vapor app as basic as possible.
I followed this tutorial (it's short):
https://medium.com/@ankitank/deploy-a-basic-vapor-app-with-nginx-and-supervisor-1ef303320726
I followed the steps and didn't get errors.
The problem is, when I try to call [IP]/hello like in the tutorial, I get 502 Bad Gateway as answer.
Nginx gives me this error:
connect() failed (111: Connection refused) while connecting to upstream, client: [IP], server: _, request: "GET /hello HTTP/1.1", upstream: "http://127.0.0.1:8080/hello", host: "[IP]"
I hope you can help me with this. :)
Update 1:
I changed the config to this:
server {
listen 80;
listen [::]:80;
server_name [DOMAIN];
error_log /var/log/[DOMAIN]_error.log warn;
access_log /var/log/[DOMAIN]_access.log;
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
large_client_header_buffers 8 32k;
location / {
# redirect all traffic to localhost:8080;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://127.0.0.1:8080/;
proxy_redirect off;
proxy_read_timeout 86400;
# enables WS support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# prevents 502 bad gateway error
proxy_buffers 8 32k;
proxy_buffer_size 64k;
reset_timedout_connection on;
tcp_nodelay on;
client_max_body_size 10m;
}
location ~* \.(jpg|jpeg|gif|css|png|js|ico|xml|html|mp4)$ {
access_log off;
expires 30d;
root /home/[AppName]/Public;
}
}
Unfortunately I still get this one:
2019/12/01 14:48:04 [error] 6801#6801: *1 connect() failed (111: Connection refused) while connecting to upstream, client: [IP], server: [DOMAIN], request: "GET /hello HTTP/1.1", upstream: "http://127.0.0.1:8080/hello", host: [DOMAIN]
Update 2:
The error was related to this line:
proxy_pass http://127.0.0.1:8080/;
I had to change it to this:
proxy_pass http://localhost:8080/;
It seems localhost and 127.0.0.1 are not the same address here.
Now I can run the app via "vapor run" and I can access it. :)
Big thanks to @imike for all the help!!!
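If anyone hits the same mismatch: localhost can resolve to either 127.0.0.1 or [::1], and an app may be listening on only one of them. As an illustration (the upstream name here is made up), an nginx upstream block that covers both loopback addresses would look like:

```nginx
upstream vapor_app {
    server 127.0.0.1:8080;       # IPv4 loopback
    server [::1]:8080 backup;    # IPv6 loopback, tried only if the IPv4 address fails
}
```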
You could try my production config, which works 100%, with SSL and websockets support:
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name mydomain.com;
error_log /var/log/mydomain.com_error.log warn;
access_log /var/log/mydomain.com_access.log;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_certificate /etc/letsencrypt/live/mydomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
ssl_ciphers 'HIGH:!aNULL:!MD5:!kEDH';
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
ssl_stapling on;
ssl_stapling_verify on;
large_client_header_buffers 8 32k;
location / {
# redirect all traffic to localhost:8080;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://127.0.0.1:8080/;
proxy_redirect off;
proxy_read_timeout 86400;
# enables WS support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# prevents 502 bad gateway error
proxy_buffers 8 32k;
proxy_buffer_size 64k;
reset_timedout_connection on;
tcp_nodelay on;
client_max_body_size 10m;
}
location ~* \.(jpg|jpeg|gif|css|png|js|ico|xml|html|mp4)$ {
access_log off;
expires 30d;
root /apps/myApp/Public;
}
}
At the end of the config you can see that nginx serves static files from the Public folder directly, without involving the running Vapor app.
In your config.swift file you should use FileMiddleware only on macOS, where you test the app without nginx, because this middleware is really slow; I suggest putting it behind a compiler check:
#if os(macOS)
middlewares.use(FileMiddleware.self) // Serves files from `Public/` directory
#endif
I have three Docker containers in my project: Nginx, a Tornado app, and a DB. My Tornado app serves a WebSocket app (URLs /clientSocket and /gatewaySocket) and a Django app (every other URL). I use an upstream block to serve the Tornado app (which runs on port 8000) through Nginx. My project worked fine for the last few months with no errors, until today, when I started getting strange 504 errors from Nginx. Here is my Nginx config file:
limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=sms:10m rate=1r/m;
upstream my_server{
server web_instance_1:8000; # tornado app
}
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 80;
server_name server.com;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
server_name server.com;
ssl on;
ssl_certificate /etc/nginx/ssl/chained.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
location / {
# limit_req zone=one burst=5;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_pass https://my_server;
}
location /rest/register/gateway/phone_number {
limit_req zone=sms burst=5;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_pass https://my_server;
}
location ~ /.well-known {
root /var/www/acme;
allow all;
}
location ~ ^/(admin|main-panel) {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_pass https://my_server;
}
location /gatewaySocket {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_pass https://my_server;
}
location /clientSocket {
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_pass https://my_server;
}
}
and here are the strange upstream timeout errors:
2018/06/12 19:23:09 [error] 5#5: *154 upstream timed out (110: Connection timed out) while reading response header from upstream, client: x.x.x.x, server: server.com, request: "GET /admin/main/serverlogs/834591/change/ HTTP/1.1", upstream: "https://172.18.0.3:8000/admin/main/serverlogs/834591/change/", host: "server.com", referrer: "https://server.com/admin/main/serverlogs/"
2018/06/12 19:23:09 [error] 5#5: *145 upstream timed out (110: Connection timed out) while reading response header from upstream, client: x.x.x.x, server: server.com, request: "GET /robots.txt HTTP/1.1", upstream: "https://172.18.0.3:8000/robots.txt", host: "server.com"
2018/06/12 19:40:51 [error] 5#5: *420 upstream timed out (110: Connection timed out) while SSL handshaking to upstream, client: x.x.x.x, server: server.com, request: "GET /gatewaySocket HTTP/1.1", upstream: "https://172.18.0.3:8000/gatewaySocket", host: "server.com:443"
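A detail that may matter here: every proxy_pass in the config above uses https:// against the backend on port 8000, and the last log line shows nginx attempting an SSL handshake to that backend. If the Tornado app itself does not terminate TLS (only nginx does, on port 443), a plain-HTTP pass would avoid the handshake entirely. A sketch of one of the locations, reusing the same upstream name, under that assumption:

```nginx
location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_pass http://my_server;   # plain HTTP to the container; TLS terminates at nginx
}
```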
I've set up an Elasticsearch server with Kibana to gather some logs.
Elasticsearch is behind a reverse proxy by Nginx, here is the conf :
server {
listen 8080;
server_name myserver.com;
error_log /var/log/nginx/elasticsearch.proxy.error.log;
access_log off;
location / {
# Deny Nodes Shutdown API
if ($request_filename ~ "_shutdown") {
return 403;
break;
}
# Pass requests to ElasticSearch
proxy_pass http://localhost:9200;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
# For CORS Ajax
proxy_pass_header Access-Control-Allow-Origin;
proxy_pass_header Access-Control-Allow-Methods;
proxy_hide_header Access-Control-Allow-Headers;
add_header Access-Control-Allow-Headers 'X-Requested-With, Content-Type';
add_header Access-Control-Allow-Credentials true;
}
}
Everything works well, I can curl -XGET "myserver.com:8080" to check, and my logs come in.
But every minute or so, in the nginx error logs, I get that :
2014/05/28 12:55:45 [error] 27007#0: *396 connect() failed (111: Connection refused) while connecting to upstream, client: [REDACTED_IP], server: myserver.com, request: "POST /_bulk?replication=sync HTTP/1.1", upstream: "http://[::1]:9200/_bulk?replication=sync", host: "myserver.com"
I can't figure out what it is. Is there any problem in the conf that would prevent some _bulk requests from coming through?
It seems an upstream block and a different keepalive setting are necessary for the ES backend to work properly; I finally had it working using the following configuration. Note that it also points at 127.0.0.1 explicitly rather than localhost, which avoids the resolution to the IPv6 loopback [::1] seen in the error log:
upstream elasticsearch {
server 127.0.0.1:9200;
keepalive 64;
}
server {
listen 8080;
server_name myserver.com;
error_log /var/log/nginx/elasticsearch.proxy.error.log;
access_log off;
location / {
# Deny Nodes Shutdown API
if ($request_filename ~ "_shutdown") {
return 403;
break;
}
# Pass requests to ElasticSearch
proxy_pass http://elasticsearch;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
# For CORS Ajax
proxy_pass_header Access-Control-Allow-Origin;
proxy_pass_header Access-Control-Allow-Methods;
proxy_hide_header Access-Control-Allow-Headers;
add_header Access-Control-Allow-Headers 'X-Requested-With, Content-Type';
add_header Access-Control-Allow-Credentials true;
}
}