I am trying to proxy_pass requests with nginx to a public FQDN.
The upstream sits behind a load balancer that is configured to respond only when accessed by FQDN, and it returns an SSL handshake error when accessed by IP.
My issue is that nginx implicitly resolves the FQDN to a set of IPs, tries them one by one, and fails.
Is there a way to have nginx proxy_pass without converting the FQDN to an IP, so the request is routed to the upstream by FQDN?
location /public/api {
    proxy_pass https://public.server.com/api;
    proxy_set_header Host $host;
}
2022/04/24 23:10:20 [error] 912419#912419: *5 peer closed connection in SSL handshake (104: Connection reset by peer) while SSL handshaking to upstream, client: xxxxxxxx, server: _, request: "POST /<api> HTTP/1.1", upstream: "https://<ip1>:443/<api>", host: "<ip>"
2022/04/24 23:10:20 [error] 912419#912419: *5 peer closed connection in SSL handshake (104: Connection reset by peer) while SSL handshaking to upstream, client: xxxxxxxx, server: _, request: "POST /<api> HTTP/1.1", upstream: "https://<ip2>:443/<api>", host: "<ip>"
2022/04/24 23:10:20 [error] 912419#912419: *5 peer closed connection in SSL handshake (104: Connection reset by peer) while SSL handshaking to upstream, client: xxxxxxxx, server: _, request: "POST /<api> HTTP/1.1", upstream: "https://<ip3>:443/<api>", host: "<ip>"
Add a client certificate and private key so that nginx can authenticate itself to each back-end server, using the proxy_ssl_certificate and proxy_ssl_certificate_key directives:
location /public/api {
    proxy_pass https://public.server.com/api;
    proxy_set_header Host $host;
    proxy_ssl_certificate /etc/nginx/client.pem;
    proxy_ssl_certificate_key /etc/nginx/client.key;
}
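If the load balancer selects its certificate by server name, the handshake can also fail simply because nginx does not send SNI to the upstream by default. A minimal sketch, assuming the upstream expects public.server.com as the SNI name (not a confirmed fix for this particular setup):

location /public/api {
    proxy_pass https://public.server.com/api;
    proxy_set_header Host $host;
    # Pass the server name (SNI) to the upstream during the TLS handshake; off by default
    proxy_ssl_server_name on;
    # Name used for SNI (and for certificate verification, if enabled)
    proxy_ssl_name public.server.com;
}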
I would like to set up an ngrok-like self-hosted server, but I am having some trouble with TCP connections. It works well over HTTPS with the Nginx config below (it forwards my local web server via this ssh command):
ssh -R 8888:localhost:5000 abc.xyz
upstream tunnel {
    server 127.0.0.1:8888;
}

server {
    server_name abc.xyz;
    access_log /var/log/nginx/$host;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
        proxy_pass http://localhost:8888/;
    }

    error_page 502 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Then I tried to set up TCP forwarding for my VNC server (port 5900) with the config below:
stream {
    log_format dns '$remote_addr - - [$time_local] $protocol $status $bytes_sent $bytes_received $session_time "$upstream_addr"';
    access_log /var/log/nginx/access.log dns;
    error_log /var/log/nginx/error.log;

    upstream stream_backend {
        server 127.0.0.1:5902;
    }

    server {
        listen 5903;
        # TCP traffic will be forwarded to the "stream_backend" upstream group
        proxy_pass stream_backend;
    }
}
I expect it to forward my local VNC server to the internet, the way ngrok does, using this ssh command:
ssh -L 5902:127.0.0.1:5900 root@ip
Is there anything wrong with these configs?
Here are the access log and error log on my server after trying to connect to port 5903:
Error Log:
2022/02/19 09:32:54 [notice] 35807#35807: signal process started
2022/02/19 09:33:09 [error] 35808#35808: *9 connect() failed (111: Unknown error) while connecting to upstream, client: 14.186.105.235, server: 0.0.0.0:5903, upstream: "127.0.0.1:5902", bytes from/to client:0/0, bytes from/to upstream:0/0
2022/02/19 09:34:05 [error] 35808#35808: *11 connect() failed (111: Unknown error) while connecting to upstream, client: 14.186.105.235, server: 0.0.0.0:5903, upstream: "127.0.0.1:5902", bytes from/to client:0/0, bytes from/to upstream:0/0
Access Log:
14.186.105.235 - - [19/Feb/2022:09:33:09 +0000] TCP 502 0 0 0.000 "127.0.0.1:5902"
14.186.105.235 - - [19/Feb/2022:09:34:05 +0000] TCP 502 0 0 0.000 "127.0.0.1:5902"
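For comparison, the remote-forward form of the tunnel would look like the sketch below (assuming the goal is for the server's 127.0.0.1:5902 to reach the local VNC on port 5900). With ssh -L the listening socket is opened on the local machine, not on the server, which would leave nothing listening on the server's port 5902 and would match the connect() failures above; this is a general observation about ssh port forwarding, not a confirmed diagnosis of this setup.

# Open port 5902 on the remote server and tunnel it back to the local VNC server on 5900
ssh -R 5902:127.0.0.1:5900 root@ip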
I get 502 Bad Gateway when connecting to my NodeBB installation using my domain.
NodeBB is running on the default port (4567).
My nginx seems to be configured properly (when connecting using the IP): http://puu.sh/mLI7U/0e03691d4c.png
My nodebb seems to be configured properly (when connecting using the IP):
http://puu.sh/mLI95/5fdafcaed9.png
My A record directing the IP to my VPS is configured properly.
Here is my /etc/nginx/conf.d/example.com.conf:
server {
    listen 80;
    server_name sporklounge.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://127.0.0.1:4567/;
        proxy_redirect off;

        # Socket.IO Support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
My NodeBB config.json
{
    "url": "http://localhost:4567",
    "secret": "25d0d6a2-0444-49dc-af0c-bd693f5829d8",
    "database": "redis",
    "redis": {
        "host": "127.0.0.1",
        "port": "6379",
        "password": "",
        "database": "0"
    }
}
Here is my /var/log/nginx/error.log:
2016/01/27 12:04:42 [error] 22026#0: *4062 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 50.186.224.26, server: sporklounge.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:80/favicon.ico", host: "sporklounge.com", referrer: "http://sporklounge.com/"
2016/01/27 12:21:06 [crit] 974#0: *1 connect() to 127.0.0.1:4567 failed (13: Permission denied) while connecting to upstream, client: 50.186.224.26, server: sporklounge.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:4567/", host: "sporklounge.com"
2016/01/27 12:21:07 [crit] 974#0: *1 connect() to 127.0.0.1:4567 failed (13: Permission denied) while connecting to upstream, client: 50.186.224.26, server: sporklounge.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:4567/favicon.ico", host: "sporklounge.com", referrer: "http://sporklounge.com/"
All help is greatly appreciated, and I will answer any questions I can to help reach a solution. Thank you!
The one thing I see is that according to the docs, your url config value should be the full web-accessible address that points to your NodeBB. That would be sporklounge.com, not the current value.
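A sketch of the corrected config.json, assuming the forum is reached over plain HTTP at the apex domain (adjust the scheme and host if the site is actually served over HTTPS or with www):

{
    "url": "http://sporklounge.com",
    "secret": "25d0d6a2-0444-49dc-af0c-bd693f5829d8",
    "database": "redis",
    "redis": {
        "host": "127.0.0.1",
        "port": "6379",
        "password": "",
        "database": "0"
    }
}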
It could also be that the backend is just sometimes responding slowly. Try a very high value for this directive in Nginx to see whether the backend eventually responds:
# For testing, allow very long response times.
proxy_read_timeout 5m;
Also, use netstat to confirm the backend is running on port 4567:
sudo netstat -nlp | grep ':4567'
Wait, the answer may be right there in your logs, which give you the reason for the connection failure:
(13: Permission denied) while connecting to upstream
See the related question:
(13: Permission denied) while connecting to upstream:[nginx]
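That permission error when nginx connects to a local upstream port is most often SELinux denying outbound connections from the web server process. A hedged sketch of the usual check and fix, assuming a distribution with SELinux enforcing (e.g. CentOS/RHEL):

# Is SELinux enforcing?
getenforce
# Allow nginx (httpd_t) to make outbound network connections, persistently
sudo setsebool -P httpd_can_network_connect 1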
I'm running a couple of sails.js backend instances behind an nginx proxy with sticky sessions.
I keep seeing a lot of messages in my nginx error.log regarding sails.js /socket.io/ URLs timing out:
2016/01/04 20:55:15 [error] 12106#12106: *402088 upstream timed out (110: Connection timed out) while reading response header from upstream, client: x.x.x.x, server: example.com, request: "GET /socket.io/?__sails_io_sdk_version=0.11.0&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=polling&t=1451930055065-4&sid=jvekCYDAcFfu0PLdAAL6 HTTP/1.1", upstream: "http://127.0.0.1:3001/socket.io/?__sails_io_sdk_version=0.11.0&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=polling&t=1451930055065-4&sid=jvekCYDAcFfu0PLdAAL6", host: "example.com", referrer: "https://example.com/languageExchange/chat/63934"
2016/01/04 20:55:17 [error] 12105#12105: *402482 upstream prematurely closed connection while reading response header from upstream, client: y.y.y.y, server: example.com, request: "GET /socket.io/?__sails_io_sdk_version=0.11.0&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=websocket&sid=QnAe1jiKEHgj-zlKAAKu HTTP/1.1", upstream: "http://127.0.0.1:3001/socket.io/?__sails_io_sdk_version=0.11.0&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=websocket&sid=QnAe1jiKEHgj-zlKAAKu", host: "example.com"
2016/01/04 22:32:33 [error] 12107#12107: *437054 no live upstreams while connecting to upstream, client: z.z.z.z, server: example.com, request: "GET /socket.io/?__sails_io_sdk_version=0.11.0&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=websocket&sid=8G2TfOsNOJMYHZOjAAD3 HTTP/1.1", upstream: "http://sails/socket.io/?__sails_io_sdk_version=0.11.0&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=websocket&sid=8G2TfOsNOJMYHZOjAAD3", host: "example.com"
It doesn't happen for every client, but the number of such messages is significant. And sails.js does not show any relevant errors.
How should I investigate the nature of these issues?
Here's what I've tried so far (and it didn't help):
Upgraded the socket.io client to the latest version available at the time (1.3.7)
Explicitly turned off caching for /socket.io/ requests in nginx
Here are the relevant config files:
sails sockets.js:
adapter: 'socket.io-redis'
nginx:
location ^~ /socket.io/ {
    proxy_pass http://sails;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_no_cache true;
    proxy_cache_bypass true;
    proxy_redirect off;
    proxy_intercept_errors off;
}
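For reference, $connection_upgrade is normally defined once by a map in the http context (assumed to be present elsewhere in this configuration, since nginx will not start with an undefined variable). One commonly suggested mitigation, not a confirmed fix for this case, is to raise the read timeout so idle WebSocket connections are not dropped after the default 60 seconds:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

location ^~ /socket.io/ {
    proxy_pass http://sails;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    # nginx closes the proxied connection when no data flows for proxy_read_timeout
    proxy_read_timeout 10m;
    proxy_send_timeout 10m;
}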
I have a JavaScript application that makes AJAX requests to retrieve XML files from an nginx server. Normally everything is fine, but I often see errors in the nginx log (and get errors from my application's error reporting) showing that nginx experiences a timeout during the GET:
2015/11/16 21:15:21 [error] 1208#0: *4894044 upstream timed out (110: Connection timed out) while connecting to upstream, client: 209.95.138.54, server: www.servername.com, request: "GET /Shape%20Textures/Metal/Born%20to%20Shine.jpg?agentView=436314 HTTP/1.1", upstream: "http://127.0.0.1:3000/AQO/Shape%20Textures/Metal/Born%20to%20Shine.jpg?agentView=436314", host: "www.servername.com.com", referrer: "https://www.servername.com.com/?nid=39956&mode=edit"
We also sometimes get this similar error:
2015/11/17 19:03:16 [error] 1002#0: *54042 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 137.164.121.52, server: www.servername.com, request: "POST /projects/?q=node/36375/update_project_time_and_stats HTTP/1.1", upstream: "http://127.0.0.1:3000/projects/?q=node/36375/update_project_time_and_stats", host: "www.servername.com", referrer: "https://www.servername.com/AQO/?nid=36375&mode=edit"
I have seen similar posts with similar timeouts, but all of those seemed reproducible. This issue never happens to me locally, but on the live server I see 5-30 of these timeouts a day.
Here is my nginx config:
client_max_body_size 50M;

server {
    server_name servername.com;
    return 301 $scheme://www.servername.com$request_uri;
}

server {
    listen 80;
    listen 443 ssl;
    fastcgi_read_timeout 120;
    ssl_certificate /path/to/ssl/star_servername_com.pem;
    ssl_certificate_key /path/to/ssl/star_servername_com.key;

    # Redirect all non-SSL traffic to SSL.
    if ($ssl_protocol = "") {
        rewrite ^ https://$host$request_uri? permanent;
    }

    root /usr/share/nginx/html;
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name www.servername.com;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://127.0.0.1:3000;
        proxy_read_timeout 120s;
    }
}
Since I can't reproduce it, how might I track down this issue?
upstream timed out (110: Connection timed out) while connecting to upstream
means that your upstream server didn't accept the connection in time.
upstream timed out (110: Connection timed out) while reading response header from upstream
means that your upstream server didn't respond in time.
So check your upstream server at 127.0.0.1:3000. It may be configured with a small limit on incoming connections, have some sort of DDoS protection, be heavily loaded at the moment, or something else.
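One way to check the first possibility on Linux is to look at the backend's listen queue; a hedged diagnostic sketch (for listening sockets, ss reports the current accept queue and its configured limit):

# Recv-Q = connections waiting to be accepted, Send-Q = configured backlog limit
ss -ltn 'sport = :3000'
# Kernel counters for overflowed or dropped listen queues
netstat -s | grep -i listen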
I would like to refer to this answer, which shows how to optimize your settings.
Although it is not the best solution, and very much depends on what you'd like to achieve, you could simply increase proxy_read_timeout to, for example, 300 seconds.
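A minimal sketch of that location block with the relevant timeouts raised (the values are illustrative, not recommendations):

location / {
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass http://127.0.0.1:3000;
    # Time allowed to establish the TCP connection to the upstream
    proxy_connect_timeout 30s;
    # Time allowed between two successive reads of the upstream response
    proxy_read_timeout 300s;
}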
I'm running Nginx as a reverse proxy to serve content from a remote URL. It was working fine for a while, but when I moved it to another host I started getting the following errors.
I tested internet access from the new host and everything is fine; nginx even serves from its root location without issue. But when I request a location that acts as a reverse proxy, I get:
8583#0: *2 upstream timed out (110: Connection timed out) while connecting to upstream, client: 10.64.159.12, server: xxxxx.com, request: "GET /web/rest/v1/story/656903.json HTTP/1.1", upstream: "http://requestedurl.com:80/web/rest/v1/story/656903.json", host: "myurl.com"
Location config:
location /data {
    sub_filter 'http' 'https';
    sub_filter_once off;
    sub_filter_types application/json;
    proxy_read_timeout 300;
    proxy_pass http://url here ;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header Accept-Encoding "";
}
Any advice?
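Since the failure happens at connect time from the new host, a quick way to check whether the host can reach the upstream at all, independently of nginx (using the upstream name from the log above):

# Does the upstream answer from this host within a few seconds?
curl -v --max-time 10 "http://requestedurl.com/web/rest/v1/story/656903.json"
# Does DNS resolve to the address you expect from this host?
dig +short requestedurl.com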