Nginx redirects .well-known endpoints

I have a web application that runs on a WSGI server. The application exposes OpenID Connect identity provider endpoints, for instance:
/oidc/.well-known/openid-configuration
/oidc/.well-known/simple-web-discovery
/oidc/.well-known/webfinger
Requests to these endpoints are mapped to functions in my project that perform the necessary work for each endpoint. When I run the application directly, all requests are mapped and handled correctly by those functions.
The problem starts when I host the application on a public IP behind HTTPS. For this I use nginx as a reverse proxy, which makes the application reachable over a public IP via https. Here are the key sections of my nginx config file:
server {
    listen 80;
    listen [::]:80 default_server;
    server_name localhost;
    root /home/user/myApp;

    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_pass http://my_app;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-URL-SCHEME https;
    }
}
server {
    listen 443 ssl;
    server_name localhost;
    root /home/user/myApp;

    ssl_certificate /home/user/cacert.pem;
    ssl_certificate_key /home/user/privkey.pem;

    include /etc/nginx/default.d/*.conf;

    location ~ /\.well-known { allow all; }

    location / {
        proxy_pass http://my_app;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-URL-SCHEME https;
    }
}
Every request is handled correctly except those to /.well-known/* (the location ~ /\.well-known { allow all; } line in the config is actually an attempt to solve this), for which I get either 404 or 403 errors.
For instance, one error message in the nginx error log reads:
open() "/home/user/myApp/oidc/.well-known/openid-configuration" failed (13: Permission denied), client: X.X.X.X, server: localhost, request: "GET /oidc/.well-known/openid-configuration HTTP/1.1", host: "X.X.X.X"
(IP addresses are masked out)
A few points:
I'm running the application with sudo privileges, so it has read/write access to all the paths involved.
The path /home/user/myApp/oidc/.well-known/openid-configuration does not actually exist on disk (which is why I also get 404 errors).
/oidc/.well-known/openid-configuration should be mapped to a function, as happens when I host the application without nginx. So why does nginx try to access a non-existent /oidc/.well-known/* path on disk?

The problem is the setting location ~ /\.well-known { allow all; }. Because this regex location matches the request but contains no proxy_pass, nginx falls back to serving the URI as a static file under root, which is why it looks for /home/user/myApp/oidc/.well-known/openid-configuration on disk instead of forwarding the request to your application. Remove that block. Additionally, include /etc/nginx/default.d/*.conf; pulls in a default config file that also contains the setting location ~ /\.well-known { allow all; }; remove it from that file too.
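With both occurrences removed, every request, including /oidc/.well-known/*, falls through to the catch-all location and reaches the WSGI application. A minimal sketch of the HTTPS server block after the fix (paths, upstream name, and headers taken from the question):

server {
    listen 443 ssl;
    server_name localhost;

    ssl_certificate /home/user/cacert.pem;
    ssl_certificate_key /home/user/privkey.pem;

    # No special .well-known block: the catch-all location proxies
    # /oidc/.well-known/* to the application like any other request.
    location / {
        proxy_pass http://my_app;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-URL-SCHEME https;
    }
}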


location "/app" cannot be inside the named location

I want to configure an nginx reverse-proxy server to redirect requests to different servers depending on:
the endpoint
whether it is a plain web request or a websocket upgrade request
I know I can use locations to manage the first point, and named locations to manage the second point, but how can I do both?
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location /app {
        location @web {
            proxy_pass http://127.0.0.1:9080/app;
        }
        location @ws {
            proxy_pass http://127.0.0.1:9081;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
        }
    }
}
I get the error message: location "/app" cannot be inside the named location "@web"
What am I supposed to do to manage that kind of mixed traffic?
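Named locations are only allowed at the server level, so they cannot be nested under location /app. One common workaround (a sketch, not from the original thread; the ports are taken from the question) is to pick the upstream with a map on $http_upgrade instead of named locations:

# Route websocket upgrade requests to one backend, plain HTTP to another.
map $http_upgrade $app_backend {
    default   http://127.0.0.1:9080;   # plain web requests
    websocket http://127.0.0.1:9081;   # websocket upgrade requests
}

# Standard helper for the Connection header.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location /app {
        proxy_pass $app_backend;   # variable proxy_pass; IP upstreams need no resolver
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}

With a variable in proxy_pass, the original request URI is passed through unchanged, so /app reaches both backends as /app.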

Nginx load balancing not working

I've been trying to wrap my head around load balancing over the past few days and have hit somewhat of a snag. I thought I'd set everything up correctly, but it appears that almost all of my traffic still goes through my primary server, while the weights I've set should be sending traffic 1:10 to the primary.
My current load balancer config:
upstream backend {
    least_conn;
    server 192.168.x.xx weight=10 max_fails=3 fail_timeout=5s;
    server 192.168.x.xy weight=1 max_fails=3 fail_timeout=10s;
}

server {
    listen 80;
    server_name somesite.somesub.org www.somesite.somesub.org;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host somesite.somesub.org;
        proxy_pass http://backend$request_uri;
    }
}

server {
    listen 443;
    server_name somesite.somesub.org www.somesite.somesub.org;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host somesite.somesub.org;
        proxy_pass http://backend$request_uri;
    }
}
And my current site config is as follows:
server {
    listen 192.168.x.xx:80;
    server_name somesite.somesub.org;
    index index.php index.html;
    root /var/www/somesite.somesub.org/html;
    access_log /var/www/somesite.somesub.org/logs/access.log;
    error_log /var/www/somesite.somesub.org/logs/error.log;

    include snippets/php.conf;
    include snippets/security.conf;

    location / {
        #return 301 https://$server_name$request_uri;
    }
}

server {
    listen 192.168.x.xx:443 ssl http2;
    server_name somesite.somesub.org;
    index index.php index.html;
    root /var/www/somesite.somesub.org/html;
    access_log /var/www/somesite.somesub.org/logs/access.log;
    error_log /var/www/somesite.somesub.org/logs/error.log;

    include snippets/php.conf;
    include snippets/security.conf;
    include snippets/self-signed-somesite.somesub.org.conf;
}
The other node's configuration is exactly the same, aside from a different IP address.
A small detail that may or may not matter: one of the nodes is hosted on the same machine as the load balancer.
Both machines have correct firewall configs and can be accessed separately, and no error logs show anything of use.
The only thing I could think of is that the nginx site config is being matched before the load balancer config, and I wasn't sure how to fix that.
Taking another look at the configuration, I realized I could just as easily have the site config that lives on the load balancer listen on 127.0.0.1 and list that address among the available servers in the upstream block. Making the site on the load balancer listen on localhost:80/443 solved the issue.
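In other words, the public address is left to the load balancer alone, and the co-located node is reached over loopback. A sketch of what the fixed layout might look like (a guess based on the description above, reusing the names from the question):

upstream backend {
    least_conn;
    # The node on the balancer machine now listens on loopback only,
    # so it no longer answers on the public IP ahead of the balancer.
    server 127.0.0.1 weight=10 max_fails=3 fail_timeout=5s;
    server 192.168.x.xy weight=1 max_fails=3 fail_timeout=10s;
}

server {
    listen 192.168.x.xx:80;   # the balancer owns the public address
    server_name somesite.somesub.org www.somesite.somesub.org;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host somesite.somesub.org;
        proxy_pass http://backend;
    }
}

server {
    listen 127.0.0.1:80;      # local site, reachable only via loopback
    server_name somesite.somesub.org;
    root /var/www/somesite.somesub.org/html;
    index index.php index.html;
}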

Nginx/Pyramid custom SSL port

By way of preface, I have been using the following stack for some time with great success:
NGINX - web proxy
SSL - configured in nginx
Pyramid web application, served by gunicorn
The above combo works great; here is a working configuration:
server {
    # listen on port 80
    listen 80;
    server_name portalapi.example.com;
    # Forward all traffic to SSL
    return 301 https://www.portalapi.example.com$request_uri;
}

server {
    # listen on port 80
    listen 80;
    server_name www.portalapi.example.com;
    # Forward all traffic to SSL
    return 301 https://www.portalapi.example.com$request_uri;
}

# ssl server
server {
    listen 443 ssl;
    ssl on;
    ssl_certificate /usr/local/etc/letsencrypt/live/portalapi.example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/portalapi.example.com/privkey.pem;
    server_name www.portalapi.example.com;
    client_max_body_size 10M;
    client_body_buffer_size 128k;

    location ~ /.well-known/acme-challenge/ {
        root /usr/local/www/nginx/portalapi;
        allow all;
    }

    location / {
        proxy_set_header Host $host;
        proxy_pass http://10.1.1.16:8005;
        #proxy_intercept_errors on;
        allow all;
    }

    error_page 404 500 502 503 504 /index.html;

    location = / {
        root /home/luke/ecom2/dist;
    }
}
Now, this is how I serve my public-facing apps, and it works very well. For my internal applications, I used to simply direct users to an internal domain, e.g. http://subdomain.company.domain; again, this worked well for a long time.
In the wake of the KRACK attack, although we have some very thorough firewall rules to prevent a lot of attacks, I want to force all internal traffic through SSL. I don't want to use a self-signed certificate; I want to use Let's Encrypt so I can auto-renew certificates, which makes administration much easier (and cheaper).
In order to use Let's Encrypt, I need a public-facing DNS entry and server to perform the ACME challenge (for auto-renewal). Again, this was very easy to set up in nginx, and the config below works perfectly for serving static content.
What it does: if a user from the internet accesses intranet.example.com, they simply see a forbidden message. A local user, however, is redirected to intranet.example.com:8002, and port 8002 is only reachable locally, so there is no way external users can access a webpage on this site:
geo $local_user {
    192.168.155.0/24 0;
    172.16.10.0/28 1;
    172.16.155.0/24 1;
}

server {
    listen 80;
    server_name intranet.example.com;
    client_max_body_size 4M;
    client_body_buffer_size 128k;

    # Space for lets encrypt to perform challenges
    location ~ /\.well-known/ {
        root /usr/local/www/nginx/intranet;
    }

    if ($local_user) {
        # If user is local, redirect them to SSL proxy only available locally
        return 301 https://intranet.example.com:8002$request_uri;
    }

    # Default block all non local users see
    location / {
        root /home/luke/forbidden_html;
        index index.html;
    }
}

# This server block is only available to local users inside geo $local_user.
# It listens on an internal port only, so it is never available to
# external networks.
server {
    listen 8002 default ssl; # listen on a port only accessible locally
    server_name intranet.example.com;
    ssl_certificate /usr/local/etc/letsencrypt/live/intranet.example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/intranet.example.com/privkey.pem;
    client_max_body_size 4M;
    client_body_buffer_size 128k;

    location / {
        allow 192.168.155.0/24;
        allow 172.16.10.0/28; # also add in allow/deny rules in this block (extra security)
        allow 172.16.155.0/24;
        root /home/luke/ecom2/dist;
        index index.html;
        deny all;
    }
}
Now here comes the problem of marrying Pyramid and nginx: if I use the same configuration as above, but with the following server block on 8002:
server {
    listen 8002 default ssl; # listen on a port only accessible locally
    server_name intranet.example.com;
    ssl_certificate /usr/local/etc/letsencrypt/live/intranet.example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/intranet.example.com/privkey.pem;
    client_max_body_size 4M;
    client_body_buffer_size 128k;

    location / {
        allow 192.168.155.0/24;
        allow 172.16.10.0/28; # also add in allow/deny rules in this block (extra security)
        allow 172.16.155.0/24;
        # Forward all requests to python application server
        proxy_set_header Host $host;
        proxy_pass http://10.1.1.16:6543;
        proxy_intercept_errors on;
        deny all;
    }
}
I run into all sorts of problems. First off, inside Pyramid I was using the following code in my views/templates:
request.route_url # get route url for desired function
With the above settings, request.route_url should cause https://intranet.example.com:8002/login to route to https://intranet.example.com:8002/welcome, but in reality this setup forwards the user to http://intranet.example.com/welcome. This is not correct.
And if I use route_url with the nginx proxy setting:
proxy_set_header Host $http_host;
nginx returns a 400 error:
400: The plain HTTP request was sent to HTTPS port
and a request to https://intranet.example.com:8002/ gets reverted to http://intranet.example.com/login (omitting the port and https).
Then I kept the same nginx setting (Host $http_host) but thought I would change to using:
request.route_path
My theory was that this should force everything to stay on the same URL prefix, forwarding a user from https://intranet.example.com:8002/login to https://intranet.example.com:8002/welcome, but in reality this setup behaved the same way as route_url: navigating to https://intranet.example.com:8002 gives
400: The plain HTTP request was sent to HTTPS port
and a request to https://intranet.example.com:8002/ gets reverted to http://intranet.example.com/login (omitting the port and https).
Can anyone assist with the correct setup so that I can serve my application on https://intranet.example.com:8002?
EDIT:
Have also tried:
location / {
    allow 192.168.155.0/24;
    allow 172.16.10.0/28; # also add in allow/deny rules in this block (extra security)
    allow 172.16.155.0/24;
    # Forward all requests to python application server
    proxy_set_header Host $host:$server_port;
    proxy_pass http://10.1.1.16:8002;
    proxy_intercept_errors on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    # root /home/luke/ecom2/dist;
    # index index.html;
    deny all;
}
Which gives the same result.
I've checked a similar configuration and your last example seems correct, at least for a simplistic gunicorn/Pyramid app combination. It seems something is missing in your puzzle :)
Here's my code (I'm new to Pyramid, so some things might be done better).
helloworld.py
helloworld.py
from pyramid.config import Configurator
from pyramid.renderers import render_to_response

def main(request):
    return render_to_response('templates:test.pt', {}, request=request)

with Configurator() as config:
    config.add_route('main', '/')
    config.add_view(main, route_name='main')
    config.include('pyramid_chameleon')
    app = config.make_wsgi_app()
templates/test.pt
<html>
<body>
Route url: ${request.route_url('main')}
</body>
</html>
My nginx config
server {
    listen 80;
    server_name pyramid.lan;

    location / {
        return 301 https://$server_name:8002$request_uri;
    }
}

server {
    listen 8002;
    server_name pyramid.lan;
    ssl on;
    ssl_certificate /usr/local/etc/nginx/cert/server.crt;
    ssl_certificate_key /usr/local/etc/nginx/cert/server.key;

    location / {
        proxy_set_header Host $host:$server_port;
        proxy_pass http://127.0.0.1:5678;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
This is how I run gunicorn:
gunicorn -w 1 -b 127.0.0.1:5678 helloworld:app
And yes, it works:
$ curl --insecure https://pyramid.lan:8002/
<html>
<body>
Route url: https://pyramid.lan:8002/
</body>
</html>
$ curl -D - http://pyramid.lan
HTTP/1.1 301 Moved Permanently
Server: nginx/1.12.2
Date: Thu, 02 Nov 2017 20:41:50 GMT
Content-Type: text/html
Content-Length: 185
Connection: keep-alive
Location: https://pyramid.lan:8002/
Let's figure out what might go wrong in your case:
HTTP 400 usually pops up when you speak plain HTTP to a server awaiting HTTPS requests. If there's no typo in the post and it indeed occurs when you navigate to https://intranet.example.com:8002, it would be nice to see a curl request showing this and a tcpdump of what's happening. You can actually reproduce the error easily by typing http://intranet.example.com:8002.
Another idea is that you're doing a redirect from your app and the link gets broken when the redirect occurs. A better description of how the user navigates from https://intranet.example.com:8002/login to .../welcome would be helpful.
One more idea is that your app is not that simple and you use some middleware/customization that makes the default logic work differently, so your X-Forwarded-Proto header gets ignored; in that case the behavior would be just as you described.
The issue here is, obviously, the missing port within the Location headers that your backend produces.
Now, why is the port missing? Most certainly because of the following code:
proxy_set_header Host $host;
Note that $host itself does not contain $server_port, unlike $http_host, so your backend has no way of knowing which port you meant if you just use $host all by itself.
Note also that the default setting of proxy_redirect default expects the Location header to correspond with the value of proxy_pass in order to do its magic (according to the documentation), so your explicit Host header setting likely interferes with that logic.
As such, from the nginx point of view, I see multiple possible independent solutions:
remove proxy_set_header Host, and let proxy_redirect do its magic;
set proxy_set_header Host appropriately, to include the port number, e.g., using $host:$server_port or $http_host as you see fit (if that doesn't work, then perhaps the deficiency is actually within your upstream app itself, but fear not: read below);
provide a custom proxy_redirect setting, e.g., proxy_redirect https://pyramid.lan/ / (equivalent to proxy_redirect https://pyramid.lan/ https://pyramid.lan:8002/), which will ensure that all Location responses have the proper port; the only way this wouldn't work is if your upstream does non-HTTP redirects with the missing port.
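For illustration, here is a minimal sketch of the second option applied to the 8002 server block from the question (upstream address and certificate paths are taken from the question; the commented-out line shows the third option as an alternative):

server {
    listen 8002 ssl;
    server_name intranet.example.com;
    ssl_certificate /usr/local/etc/letsencrypt/live/intranet.example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/intranet.example.com/privkey.pem;

    location / {
        # Include the port in Host so the backend generates redirects
        # and route URLs that keep :8002.
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://10.1.1.16:6543;

        # Alternative (third option): leave Host alone and rewrite the
        # port back into Location headers on the way out:
        # proxy_redirect http://intranet.example.com/ https://intranet.example.com:8002/;
    }
}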

Nginx: How to forward requests to a port using proxy_pass

I'm just getting started with Nginx and am trying to set up a server block to forward all requests on the subdomain api.mydomain.com to port 8080.
Here's what I've got:
UPDATED:
server {
    server_name api.mydomain.com;

    location / {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}

server {
    server_name www.mydomain.com;
    return 301 $scheme://mydomain.com$request_uri;
}

server {
    server_name mydomain.com;
    root /var/www/mydomain.com;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}
The server block exists in /etc/nginx/sites-available and I have created a symlink in /etc/nginx/sites-enabled.
What I expect:
I'm running deployd on port 8080. When I go to api.mydomain.com/users I expect to get a JSON response from the deployd API, but I get no response instead.
Also, my 301 redirect for www.mydomain.com is not working. That block was code I copied from Nginx Pitfalls.
What I've tried:
Confirmed that mydomain.com:8080/users and curl http://127.0.0.1:8080/users return the expected response.
Restarted the nginx service after making changes to the server block.
Tried removing the proxy_set_header lines.
Any idea what I'm missing here?
You shouldn't need to explicitly capture the URL for your use case. The following should work for your location block:
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;
}
As it turns out, my problem was not with the Nginx configuration but with my DNS settings. I had to create an A record for each of my subdomains (www and api). Rookie mistake.
A colleague of mine actually helped me troubleshoot the issue. We discovered the problem by using telnet from my local machine to connect to the server by IP address, which showed that Nginx was, in fact, doing exactly what I intended.
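A quick way to check for this class of problem is to bypass DNS entirely and send the request straight to the server's IP with the right Host header; if this works while the normal URL doesn't, DNS is the culprit (SERVER_IP below is a placeholder for your server's address):

curl --resolve api.mydomain.com:80:SERVER_IP http://api.mydomain.com/users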

Gitlab clone over http fails to authenticate from external network

I have Gitlab 5.2 + Nginx installed on a local machine at my university. Cloning over http works for machines within the internal network, but trying to clone from a machine on an external network results in a "fatal: Authentication failed" message, even though the exact same credentials are supplied (the same ones I use to log in to Gitlab via the web interface).
The Gitlab web interface is accessible from external networks. It is only cloning over http that fails (cloning over ssh is not possible because port 22 is blocked).
Here are some lines from the relevant configuration files:
from config/gitlab.yml
host: mydomain
port: 80
https: false
Here are the relevant lines from the ngnix config file
server {
    listen *:80 default_server;   # In most cases *:80 is a good idea
    server_name mydomain;         # e.g., server_name source.example.com;
    root /home/git/gitlab/public;

    # individual nginx logs for this gitlab vhost
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;

    location / {
        # serve static files from defined root folder;
        # @gitlab is a named location for the upstream fallback, see below
        try_files $uri $uri/index.html $uri.html @gitlab;
    }

    # if a file which is not found in the root folder is requested,
    # then the proxy passes the request to the upstream (gitlab unicorn)
    location @gitlab {
        proxy_read_timeout 2000;     # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_connect_timeout 2000;  # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_redirect off;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://gitlab;
    }
}
Note: I have added the line 127.0.0.1 mydomain to /etc/hosts, but it doesn't really help (based on https://github.com/gitlabhq/gitlabhq/issues/3483#issuecomment-15783597).
Any ideas on what the issue might be, or how I might debug it?
I believe this is fixed in 5.3, so try updating. See:
https://github.com/gitlabhq/gitlabhq/blob/master/CHANGELOG#L41
https://github.com/gitlabhq/gitlabhq/blob/master/config/gitlab.yml.example#L151
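If updating doesn't resolve it, one way to narrow the problem down is to exercise git's smart-HTTP endpoint directly with curl from both an internal and an external machine and compare the responses; this is the same request git clone issues first (the group/repository path below is hypothetical):

curl -v -u myuser "http://mydomain/mygroup/myrepo.git/info/refs?service=git-upload-pack"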
