How to proxy a re-written url in Nginx? - nginx

I have the following config:
js_include /etc/nginx/scripts/encode_request.js;
js_set $encoded_request re_encode_url;
log_format logEncoded $encoded_request;
server {
    listen 443 ssl;
    listen [::]:443;
    server_name myfirst-domain.com;
    ssl on;
    ssl_certificate /etc/ssl/certs/cert.cer;
    ssl_certificate_key /etc/ssl/private/cert.key;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        if ($request_uri ~ ^/lool/https%3A/alf.mydomain.com/(.*)$){
            access_log /var/log/nginx/access.log logEncoded; #Output the encoded url to the logs. (For debugging purposes)
            rewrite ^/lool/https%3A/alf.mydomain.com/(.*)$ $encoded_request;
        }
        proxy_pass https://localhost:9980;
    }
}
The purpose of which is to filter a URL request that contains a decoded URL that's required by the backend service. The problem is that whilst the request URL has been successfully encoded, it is not being proxied to the backend service; instead I get the original decoded URL, which in turn causes an error, though I do get the correctly encoded URL output in the access.log.
Not by far an NGINX or web server savvy person, so I'd appreciate some pointers as to what I'm doing wrong / missing.
Another thing that might be of note is that the request upgrades to websocket communication between the client and the server and I am proxying that.
I'm using NGINX 1.13.6 on Debian Jessie.

I solved the issue using nginScript in the end.
I tossed the conditional and just did everything in nginScript, so the virtual host file (or server block) is simplified thus:
server {
    listen 443 ssl;
    listen [::]:443;
    server_name myfirst-domain.com;
    ssl on;
    ssl_certificate /path/to/ssl/certificate;
    ssl_certificate_key /path/to/ssl/certificate/key;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        #Optional
        access_log /var/log/nginx/ll_access.log;
        error_log /var/log/nginx/ll_error.log;
        proxy_pass https://127.0.0.1:9980$encoded_request;
    }
}
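An added example, not part of the original answer (which does not show its encode_request.js): one way the re_encode_url handler named in js_set could build the value used in proxy_pass https://127.0.0.1:9980$encoded_request. It assumes the embedded URL is everything between /lool/ and the query string and that the njs build exposes r.variables.request_uri; the helper name reEncodeLoolUri is hypothetical.

```javascript
// Added example; the poster's encode_request.js is not shown on the page.
var PREFIX = '/lool/';
// Same host the question's `if` regex matched; other hosts are not rewritten.
var ALLOWED_ORIGIN = 'https://alf.mydomain.com';

// Hypothetical helper: re-encode the URL embedded after /lool/ as one path segment.
// Any URI that does not qualify is returned unchanged, so ordinary requests in
// the same location are proxied exactly as the client sent them.
function reEncodeLoolUri(requestUri) {
    var q = requestUri.indexOf('?');
    var path = q === -1 ? requestUri : requestUri.slice(0, q);
    var query = q === -1 ? '' : requestUri.slice(q);
    if (path.indexOf(PREFIX) !== 0) {
        return requestUri;
    }
    var embedded;
    try {
        embedded = decodeURIComponent(path.slice(PREFIX.length));
    } catch (e) {
        return requestUri;   // malformed %-escape: leave it for the backend to reject
    }
    // "https%3A/host" arrives with the double slash collapsed; restore it.
    embedded = embedded.replace(/^(https?):\/+/, '$1://');
    // Require the full origin followed by "/", so a host such as
    // "alf.mydomain.com.other.example" does not pass a bare prefix check.
    if (embedded.indexOf(ALLOWED_ORIGIN + '/') !== 0) {
        return requestUri;
    }
    // encodeURIComponent escapes ":", "/", "?", CR and LF, so the result is a
    // single segment that cannot add path components or header lines.
    return PREFIX + encodeURIComponent(embedded) + query;
}

// js_set handler named as in the question's config.
// Assumption: this njs build exposes the raw URI as r.variables.request_uri.
function re_encode_url(r) {
    return reEncodeLoolUri(r.variables.request_uri);
}
```

Returning the URI unchanged for non-matching requests matters because the answer's location / sends every request through $encoded_request. When proxy_pass contains a variable and a URI, nginx passes that URI to the upstream as is, so the encoding produced here is what the backend receives.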

Related

Nginx SSL reverse proxy doesn't work with websocket

I'm having this configuration:
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name mydomain.com;
    ssl_certificate /usr/syno/etc/certificate/ReverseProxy/baac8259-962a-4d45-a265-bf747f5f007d/fullchain.pem;
    ssl_certificate_key /usr/syno/etc/certificate/ReverseProxy/baac8259-962a-4d45-a265-bf747f5f007d/privkey.pem;
    location / {
        proxy_connect_timeout 60;
        proxy_read_timeout 60;
        proxy_send_timeout 60;
        proxy_intercept_errors off;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-Proto \"http\";
        proxy_set_header X-ProxyScheme \"http\";
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://192.168.1.4:8090;
    }
}
I want to apply SSL only on the frontend and use basic http in the backend.
This configuration should have worked but the websocket website won't open showing me this error:
WebSocket connection to 'wss://mydomain.com:80/' failed: Error in connection establishment:
net::ERR_SSL_PROTOCOL_ERROR
Digging inside the javascript file of the web application, I see this:
window.websocket = (document.location.protocol == "https:") ? new WebSocket('wss://'+document.location.hostname+":"+window.jsonPort) : new WebSocket('ws://'+document.location.hostname+":"+window.jsonPort);
The thing is that I can really not understand why will the application detect the scheme as https? Since I explicitly try to make it appear as http.
This issue was specific to the application that I was forwarding to. I have now identified and corrected the issue in that application so this is solved.

Cookie Rewrite with NGINX

Okay so I've set up a nginx server that proxies to another 2 servers with load balancing. The only thing now missing are the cookies.
I've been searching numerous forums and questions regarding the rewriting of cookies. Can anyone please give me insight as to how to fix this issue?
The web application deployed to the 2 servers is written with Vaadin.
The 2 servers are running TomEE on port 8080 and 8081 for example.
I'm rewriting through nginx from easy.io to server1:8080 and server2:8080.
Refer to image below: when navigating to server1:8080/myapplication all my cookies are available.
https://ibb.co/X86pvCq
https://ibb.co/0M0GjCt
Refer to image below: when navigating to http://worksvdnui.io/ my cookies are not available.
https://ibb.co/qBkBRqb
I've tried using proxy_cookie_path, proxy_set_header Cookie $http_cookie but to no avail.
Here's the code:
upstream worksvdnuiio {
    # ip_hash; sticky sessions!
    ip_hash;
    # server localhost:8080;
    server hades:9090;
    server loki:9090;
}
server {
    listen 80;
    listen [::]:80;
    server_name worksvdnui.io;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    location /PUSH {
        proxy_pass "http://worksvdnuiio/test.qa.gen/PUSH";
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffering off;
        proxy_ignore_client_abort off;
        proxy_read_timeout 84600s;
        proxy_send_timeout 84600s;
        break;
    }
    location / {
        proxy_pass "http://worksvdnuiio/test.qa.gen/";
        proxy_cookie_path /test.qa.gen/ /;
        proxy_set_header Cookie $http_cookie;
        proxy_pass_request_headers on;
    }
}
Any insight would be VALUABLE!
Thanks in advance.

NGINX defaulting to welcome page on Second domain name pointing to node server

I have a Single Page Application running on a node server serving angular at www.xxx.com. This is currently working.
I am trying to serve a second Node application named www.yyy.com; however, when I set up the NGINX server blocks it is defaulting to the NGINX welcome page.
www.xxx.com NGINX server block (Which is working fine):
server {
    listen 80;
    listen [::]:80;
    server_name xxx.com.au www.xxx.com.au;
    return 301 https://xxx.com.au$request_uri;
}
server {
    listen 443;
    server_name xxx.com.au www.xxx.com.au;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://127.0.0.1:3000/;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    ssl on;
    ssl_certificate /etc/letsencrypt/live/xxx.com.au/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/xxx.com.au/privkey.pem;
}
www.yyy.com Server block: (Currently only serving welcome page)
server {
    listen 80;
    server_name yyy.com www.yyy.com;
    location /site {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://127.0.0.1:3002/;
        proxy_redirect off;
    }
}
I have all the DNS set up and the host names set up on my droplet as well. I am using Vultr running Ubuntu if that helps.
I have added both via symbolic link to Sites-available and the line is present in the conf file.
EDIT: As Henry pointed out, I was serving /site
location /site {
You're serving the app at /site and not /.
You can map different config blocks to different URLs, so you could e.g. route /example to a different node server if you wanted.
Replacing location /site { with location / { as for your working block will serve your node application at the root. With no configuration for the root node nginx routes it to its default page.

Read client's info. behind Nginx proxy server

I'm using Nginx as a proxy server in front of an asp.net core application. In my application, I want to read the client request header, specifically the IP and the User-Agent, but I'm getting the Nginx Info instead. I'm using this configuration:
server {
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    listen 80;
    location / {
        proxy_pass http://172.18.2.3:5000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header User-Agent $http_user_agent;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
The application gets the IP Address using this line:
var remoteIpAddress = context.Connection.RemoteIpAddress;
Any idea what could be the problem?
Thanks in Advance.

Nginx as internal forward proxy

I have two servers, a proxy server running nginx, and a backend application server.
From the outside, everything works as expected.
From the backend, I can access any outside server.
When trying to access the very website from the backend (e.g. wget https://www.my-server-name.com) server, it leads to a timeout.
This is my configuration:
server {
    listen 172.25.9.64:80;
    server_name www.my-server-name.com;
    root /dev/null;
    return 301 https://www.my-server-name.com$request_uri;
}
limit_conn_zone $server_name zone=data:10m;
server {
    listen 172.25.9.64:443 ssl;
    server_name www.my-server-name.com;
    root /var/www;
    ssl_certificate_key /etc/ssl/server.key;
    ssl_certificate /etc/ssl/server.ca-bundle;
    location / {
        proxy_pass http://172.25.166.68:60936/;
        proxy_redirect default;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        include /etc/nginx/proxy.conf;
    }
}
Do you have any idea?
Thank you in advance :)
I simply had to add the corresponding IPs to /etc/hosts.
