I have a Rails app running on Unicorn with nginx in front of it.
# nginx.conf
upstream backend {
server unix:/home/deployer/apps/example.ru/shared/tmp/sockets/unicorn.sock.0 fail_timeout=0;
server unix:/home/deployer/apps/example.ru/shared/tmp/sockets/unicorn.sock.1 fail_timeout=0;
}
log_format default_log '$host $remote_addr [$time_local] "$request" $status $request_length "$http_referer" "$http_user_agent" $request_time';
server {
listen 80;
server_name example.ru www.example.ru dev.example.ru;
access_log /var/log/nginx/example.ru-access.log default_log;
# recursive_error_pages on;
location ~ ^/assets/ {
root /home/deployer/apps/example.ru/current/public;
gzip_static on;
expires 1y;
add_header Cache-Control public;
add_header ETag "";
break;
}
location / {
auth_basic "You shall not pass!";
auth_basic_user_file /home/deployer/.htsandbox;
proxy_set_header HOST $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_pass http://backend;
proxy_redirect off;
}
# Error pages
error_page 413 /413;
if (-f /home/deployer/apps/example.ru/shared/public/system/maintenance.html) {
return 503;
}
error_page 503 @maintenance;
location @maintenance {
if (-f $request_filename) {
break;
}
root /home/deployer/apps/example.ru/shared/public/system;
rewrite ^(.*)$ /maintenance.html break;
}
error_page 502 @bad_gateway;
location @bad_gateway {
if (-f $request_filename) {
break;
}
root /home/deployer/apps/example.ru/shared/public/system;
rewrite ^(.*)$ /bad_gateway.html break;
}
}
When a 413 error is raised by nginx, I want to pass it to the Unicorn error page /413. Instead I get the nginx 502 Bad Gateway error page.
#log
2013/12/12 12:56:10 [error] 16853#0: *55 client intended to send too large body: 4099547 bytes, client: 128.72.7.207, server: example.ru, request: "POST /users/tester/avatar HTTP/1.1", host: "dev.example.ru", referrer: "http://dev.example.ru/users/tester/avatar"
2013/12/12 12:56:41 [error] 16853#0: *55 upstream prematurely closed connection while reading response header from upstream, client: 128.72.7.207, server: example.ru, request: "POST /users/tester/avatar HTTP/1.1", upstream: "http://unix:/home/deployer/apps/example.ru/shared/tmp/sockets/unicorn.sock.0:/413", host: "dev.example.ru", referrer: "http://dev.example.ru/users/tester/avatar"
2013/12/12 12:57:12 [error] 16853#0: *55 upstream prematurely closed connection while reading response header from upstream, client: 128.72.7.207, server: example.ru, request: "POST /users/tester/avatar HTTP/1.1", upstream: "http://unix:/home/deployer/apps/example.ru/shared/tmp/sockets/unicorn.sock.1:/413", host: "dev.example.ru", referrer: "http://dev.example.ru/users/tester/avatar"
Probably nginx has already closed the connection by the time Unicorn tries to load the error page.
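For comparison, here is a sketch of what serving the 413 page as a static file would look like, in the same style as the maintenance block above (the /413.html file and the location name are assumptions, not something in my current setup):
error_page 413 @entity_too_large;
location @entity_too_large {
    # Serve the error page from disk so nginx never has to contact the
    # upstream after it has already rejected the oversized request body.
    root /home/deployer/apps/example.ru/shared/public/system;
    rewrite ^(.*)$ /413.html break;
}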
I have an old nginx-based OSM tile caching proxy configured per https://coderwall.com/p/--wgba/nginx-reverse-proxy-cache-for-openstreetmap, but since the source tile server migrated to HTTPS this setup no longer works: 421 Misdirected Request.
I based my fix on the article https://kimsereyblog.blogspot.com/2018/07/nginx-502-bad-gateway-after-ssl-setup.html. Unfortunately, after days of experimenting I am still getting a 502 error.
My theory is that the root cause is the upstream servers' SSL certificate, which uses a wildcard (*.tile.openstreetmap.org), but all attempts to use $http_host, $host, proxy_ssl_name, and proxy_ssl_session_reuse in different combinations didn't help: 421 or 502 every time.
My current nginx config is:
worker_processes auto;
events {
worker_connections 768;
}
http {
access_log /etc/nginx/logs/access_log.log;
error_log /etc/nginx/logs/error_log.log;
client_max_body_size 20m;
proxy_cache_path /etc/nginx/cache levels=1:2 keys_zone=openstreetmap-backend-cache:8m max_size=500000m inactive=1000d;
proxy_temp_path /etc/nginx/cache/tmp;
proxy_ssl_trusted_certificate /etc/nginx/ca.crt;
proxy_ssl_verify on;
proxy_ssl_verify_depth 2;
proxy_ssl_session_reuse on;
proxy_ssl_name *.tile.openstreetmap.org;
sendfile on;
upstream openstreetmap_backend {
server a.tile.openstreetmap.org:443;
server b.tile.openstreetmap.org:443;
server c.tile.openstreetmap.org:443;
}
server {
listen 80;
listen [::]:80;
server_name example.com www.example.com;
include /etc/nginx/mime.types;
root /dist/browser/;
location ~ ^/osm-tiles/(.+) {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X_FORWARDED_PROTO http;
proxy_set_header Host $http_host;
proxy_cache openstreetmap-backend-cache;
proxy_cache_valid 200 302 365d;
proxy_cache_valid 404 1m;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass https://openstreetmap_backend/$1;
break;
}
}
}
}
But it still produces an error when accessing https://example.com/osm-tiles/12/2392/1188.png:
2021/02/28 15:05:47 [error] 23#23: *1 upstream SSL certificate does not match "*.tile.openstreetmap.org" while SSL handshaking to upstream, client: 172.28.0.1, server: example.com, request: "GET /osm-tiles/12/2392/1188.png HTTP/1.0", upstream: "https://151.101.2.217:443/12/2392/1188.png", host: "localhost:3003"
The host OS is Ubuntu 20.04 (HTTPS is handled there), nginx is running in Docker from the nginx:latest image, and ca.crt is Ubuntu's default CA certificate bundle.
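As far as I can tell, the check that fails is governed by the SNI-related proxy directives; this is a sketch of the combination I have in mind (not something I have confirmed works, and the pinned hostname is an arbitrary choice):
# Sketch only: send SNI to the upstream and verify against a concrete
# tile hostname instead of the wildcard.
proxy_ssl_server_name on;                         # include SNI in the upstream TLS handshake
proxy_ssl_name a.tile.openstreetmap.org;          # assumed name; must match the upstream certificate
proxy_set_header Host a.tile.openstreetmap.org;   # the tile servers expect their own Host header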
Please help.
I have two domains,
alpha.mydomain.com and api-alpha.mydomain.com,
and I am trying to use nginx as a proxy.
I am getting the error:
Access to XMLHttpRequest at 'https://api-alpha.mydomain.com/dup-check'
from origin 'https://alpha.mydomain.com' has been blocked by CORS
policy: Response to preflight request doesn't pass access control
check: No 'Access-Control-Allow-Origin' header is present on the
requested resource.
Based on my setup, I would think the request should end up at 127.0.0.1 rather than api-alpha.mydomain.com (and so not hit the CORS error).
NOTE: I am using Cloudflare for HTTPS, so the console errors show https; Cloudflare terminates the SSL and talks to my nginx server on port 80.
This is part of my nginx config:
server {
listen 80;
server_name alpha.mydomain.com ;
access_log /var/log/nginx.access_log main;
root /home/mydomain/react-front/dist;
location / {
try_files $uri $uri/ /index.html;
}
}
server {
listen 80;
server_name api-alpha.mydomain.com ;
access_log /var/log/nginx-api-alpha-access.log main;
location / {
proxy_pass http://127.0.0.1:4001/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_ssl_session_reuse off;
proxy_set_header Host $http_host;
proxy_redirect off;
}
}
This is the entry from the nginx-api-alpha-access.log
"OPTIONS /dup-check HTTP/1.1" 502 750 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36" "-"
This is the entry from /var/log/nginx/error.log
[error] 1280#1280: *12 connect() failed (111: Connection refused) while connecting to upstream, client: 172.xx.xxx.xxx, server: api-mydomain.trigfig.com, request: "OPTIONS /dup-check HTTP/1.1", upstream: "http://127.0.0.1:4001/dup-check", host: "api-alpha.mydomain.com"
Thanks, I'm not sure what I am missing in my config.
Try changing it to:
server {
listen 80;
server_name alpha.mydomain.com ;
access_log /var/log/nginx.access_log main;
root /home/mydomain/react-front/dist;
location / {
try_files $uri $uri/ /index.html;
}
}
server {
listen 80;
server_name api-alpha.mydomain.com ;
access_log /var/log/nginx-api-alpha-access.log main;
location / {
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization';
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain; charset=utf-8';
add_header 'Content-Length' 0;
return 204;
}
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization';
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range,Authorization';
proxy_pass http://127.0.0.1:4001/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-NginX-Proxy true;
proxy_ssl_session_reuse off;
proxy_set_header Host $http_host;
proxy_redirect off;
}
}
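Worth noting as a side remark (not strictly part of the change above): plain add_header directives are only attached to a limited set of success and redirect status codes, not to 4xx/5xx responses, so if the proxied request still fails (for example the 502 from the refused connection in your log) the CORS headers will be missing again. Since nginx 1.7.5 the always parameter forces the header onto every response:
add_header 'Access-Control-Allow-Origin' '*' always;   # also sent on 4xx/5xx responses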
I have two Harbor servers running behind the nginx server below (acting as load balancer and reverse proxy); the upstream group is named harbor.
Load balancer nginx config:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
upstream harbor {
ip_hash;
server 10.57.18.120;
server 10.57.18.236;
}
server {
listen 80;
location / {
proxy_pass http://harbor;
}
}
}
nginx config in harbor:
worker_processes auto;
events {
worker_connections 1024;
use epoll;
multi_accept on;
}
http {
tcp_nodelay on;
# this is necessary for us to be able to disable request buffering in all cases
proxy_http_version 1.1;
upstream registry {
server registry:5000;
}
upstream ui {
server ui:80;
}
server {
listen 80;
# disable any limits to avoid HTTP 413 for large image uploads
client_max_body_size 0;
location / {
proxy_pass http://ui/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# When setting up Harbor behind other proxy, such as an Nginx instance, remove the below line if the proxy already has similar settings.
# proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
proxy_request_buffering off;
}
location /v1/ {
return 404;
}
location /v2/ {
proxy_pass http://registry/v2/;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# When setting up Harbor behind other proxy, such as an Nginx instance, remove the below line if the proxy already has similar settings.
# proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
proxy_request_buffering off;
}
location /service/ {
proxy_pass http://ui/service/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# When setting up Harbor behind other proxy, such as an Nginx instance, remove the below line if the proxy already has similar settings.
# proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
proxy_request_buffering off;
}
}
}
When both upstream servers are up everything is OK, but if one upstream is down, nginx can't route requests to the remaining server. Here are the logs:
2016/11/17 09:05:28 [error] 6#6: *1 connect() failed (113: No route to host) while connecting to upstream, client: 10.57.2.138, server: , request: "GET / HTTP/1.1", upstream: "http://10.57.18.236:80/", host: "10.57.18.236:2000"
2016/11/17 09:05:28 [warn] 6#6: *1 upstream server temporarily disabled while connecting to upstream, client: 10.57.2.138, server: , request: "GET / HTTP/1.1", upstream: "http://10.57.18.236:80/", host: "10.57.18.236:2000"
2016/11/17 09:05:28 [error] 6#6: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 10.57.2.138, server: , request: "GET / HTTP/1.1", upstream: "http://10.57.18.120:80/", host: "10.57.18.236:2000"
2016/11/17 09:05:28 [warn] 6#6: *1 upstream server temporarily disabled while connecting to upstream, client: 10.57.2.138, server: , request: "GET / HTTP/1.1", upstream: "http://10.57.18.120:80/", host: "10.57.18.236:2000"
10.57.2.138 - - [17/Nov/2016:09:05:28 +0000] "GET / HTTP/1.1" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36" "-"
2016/11/17 09:05:28 [error] 6#6: *1 no live upstreams while connecting to upstream, client: 10.57.2.138, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://apps/favicon.ico", host: "10.57.18.236:2000", referrer: "http://10.57.18.236:2000/"
10.57.2.138 - - [17/Nov/2016:09:05:28 +0000] "GET /favicon.ico HTTP/1.1" 502 575 "http://10.57.18.236:2000/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.98 Safari/537.36" "-"
2016/11/17 09:05:34 [error] 6#6: *7 no live upstreams while connecting to upstream, client: 10.57.2.138, server: , request: "GET / HTTP/1.1", upstream: "http://apps/", host: "10.57.18.236:2000"
10.57.2.138 - - [17/Nov/2016:09:05:34 +0000] "GET / HTTP/1.1" 502 173 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/601.6.17 (KHTML, like Gecko) Version/9.1.1 Safari/601.6.17" "-"
It shows "upstream server temporarily disabled while connecting to upstream" and "no live upstreams while connecting to upstream", when upstream1 is down, but upstream2 is still up.
But I still get the "502 Bad Gateway" if I use domainUrl. At this time, visiting upstream2 (via IP) in browser works fine.
I tried to add "proxy_next_upstream" in http, in server, in the location / block, same problem.
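For reference, this is roughly the set of failover knobs on the load-balancer side that I understand to be involved (a sketch with assumed values, not a configuration I have verified against this problem):
upstream harbor {
    ip_hash;
    server 10.57.18.120 max_fails=2 fail_timeout=10s;   # mark the peer unavailable for 10s after 2 failures
    server 10.57.18.236 max_fails=2 fail_timeout=10s;
}
server {
    listen 80;
    location / {
        proxy_pass http://harbor;
        proxy_connect_timeout 3s;                        # give up on a dead host quickly
        proxy_next_upstream error timeout http_502;      # then retry the request on the other peer
    }
}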
I have a frontend running nginx which proxies requests to a backend running a web service.
I would like to serve a static file if the backend service is down.
Here is the configuration file I am using:
location ~ /api/admin {
rewrite /xxxx/(.+) /$1 break;
error_page 404 502 =200 /themes/yyyy/themes.json;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
add_header Access-Control-Allow-Credentials true;
add_header Access-Control-Allow-Origin '*';
add_header Access-Control-Allow-Methods 'GET, POST';
proxy_intercept_errors on;
proxy_pass http://xxxx;
}
location = /themes/yyyy/themes.json {
rewrite /themes/yyyy/themes.json /api/admin/thematics/edito;
}
When I call:
http://url/themes/geoportail/themes.json
I receive a 502 error from nginx instead of a 200 with the static file...
2014/08/25 17:02:35 [error] 13551#0: *6719 connect() failed (111: Connection refused) while connecting to upstream, client: 160.92.103.160, server: uri, request: "GET /themes/yyyy/themes.json HTTP/1.1", upstream: "http://IP:PORT/api/admin/thematics/edito", host: "", referrer: ""
I'm posting a solution I've found; feel free to propose something more elegant.
location ~ /api/admin {
rewrite /xxxx/(.+) /$1 break;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
add_header Access-Control-Allow-Credentials true;
add_header Access-Control-Allow-Origin '*';
add_header Access-Control-Allow-Methods 'GET, POST';
proxy_intercept_errors on;
proxy_pass http://xxxx;
error_page 404 502 503 504 =200 @statictheme;
}
location @statictheme {
try_files $uri /themes/yyyy/themes.json last;
}
location = /themes/yyyy/themes.json {
rewrite /themes/yyyy/themes.json /api/admin/thematics/edito;
}
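A simpler variant of the same fallback, in case themes.json can live on disk as a genuinely static file (the root path below is hypothetical; nothing in the original config shows where such a file would be):
location ~ /api/admin {
    # ... same proxy_set_header / add_header directives as above ...
    proxy_intercept_errors on;
    proxy_pass http://xxxx;
    error_page 404 502 503 504 =200 /themes/yyyy/themes.json;
}
location = /themes/yyyy/themes.json {
    root /var/www/static;   # hypothetical location of an on-disk copy of the file
}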
The static page is served correctly; you can visit http://www.ec2.lankenow.info/.
Click on the image and it will take you to the error page.
Nginx error log:
2012/03/16 01:58:41 [alert] 884#0: 768 worker_connections are not enough
2012/03/16 01:58:41 [error] 887#0: *3900 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 23.21.141.105, server: www.ec2.lankenow.info, request: "GET /leads HTTP/1.0", upstream: "http://23.21.141.105:80/leads", host: "www.ec2.lankenow.info", referrer: "http://www.ec2.lankenow.info/"
I think I might have to set client_header_buffer_size, but I don't know how, or which file to edit.
Any direction on how to go about it would be much appreciated.
EDIT: This can be added to the http section of nginx.conf, which can be found in /etc/nginx/ or /usr/local/nginx/ depending on your installation.
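For example, a minimal sketch of where those settings sit in /etc/nginx/nginx.conf (the values below are placeholders, not recommendations):
# /etc/nginx/nginx.conf (sketch)
events {
    worker_connections 2048;        # the error log above shows the default 768 being exhausted
}
http {
    client_header_buffer_size 4k;   # per-connection buffer for the request line and headers
    # ... the rest of the http section, including any include of sites-enabled ...
}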
Config file under /etc/nginx/sites-available/
upstream vdiamond {
server 0.0.0.0:3000;
server 0.0.0.0:3001;
server 0.0.0.0:3002;
}
server {
listen 80;
server_name www.ec2.lankenow.info;
access_log /home/ubuntu/vdiamond/log/access.log;
error_log /home/ubuntu/vdiamond/log/error.log;
root /home/ubuntu/vdiamond/public/;
index index.html;
location / {
# Add expires header for static content
location ~* \.(js|css|jpg|jpeg|gif|png)$ {
if (-f $request_filename) {
expires max;
break;
}
}
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_read_timeout 300;
proxy_next_upstream off;
#proxy_redirect false;
if (-f $request_filename/index.html) {
rewrite (.*) $1/index.html break;
}
if (-f $request_filename.html) {
rewrite (.*) $1.html break;
}
if (!-f $request_filename) {
proxy_pass http://www.ec2.lankenow.info;
break;
}
}
}