Restlet: how to build related HATEOAS links properly? - nginx

Building a webapp behind a reverse proxy/load balancer, I need to get the correct original URL of the request (pre load balancer rewrite).
I have used getReference() (in the ServerResource) to add a self reference in the HATEOAS sense. However, the documentation says that getReference() can be manipulated by the routing, and currently it does not include the correct scheme (http instead of https; the load balancer terminates HTTPS).
Here is the NGINX config with regard to the forwarded headers:
location /api {
    proxy_pass http://test-service;
    proxy_pass_header X-Host;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X_FORWARDED_PROTO $scheme;
}
Is the reverse proxy config incorrect, or should I use the getOriginalReference() method? Is there some documentation that explains how the "original" reference is constructed, i.e. which fields are used behind a reverse proxy?

I think that support for the X-Forwarded-For header must be explicitly enabled in Restlet due to potential security issues.
Here is the way to enable this feature at the server connector level:
import org.restlet.Component;
import org.restlet.Server;
import org.restlet.data.Protocol;

Component c = new Component();
Server server = c.getServers().add(Protocol.HTTP, 8182);
server.getContext().getParameters().add("useForwardedForHeader", "true");
c.start();
See this page for more details: http://restlet.com/technical-resources/restlet-framework/guide/2.3/core/base/connectors.
Once done, the corresponding hints are available in the ClientInfo object:
List<String> forwardedAddresses
= request.getClientInfo().getForwardedAddresses();
See this page for the mapping between headers and Restlet API: http://restlet.com/technical-resources/restlet-framework/guide/2.2/core/http-headers-mapping.
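On the nginx side, note that the de facto standard headers are spelled with hyphens; a header literally named X_FORWARDED_PROTO (with underscores, as in the question's config) is a different header name and is unlikely to be recognized. A minimal sketch of the location block using the conventional names (the test-service upstream is taken from the question; whether your Restlet version also consumes X-Forwarded-Proto is something to verify against the header mapping linked above):
location /api {
    proxy_pass http://test-service;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    # Comma-separated chain of client addresses; surfaced via
    # ClientInfo.getForwardedAddresses() once useForwardedForHeader is enabled.
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # Conventional hyphenated name; carries the original scheme (https) after TLS termination.
    proxy_set_header X-Forwarded-Proto $scheme;
}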
Hope this helps you,
Thierry

Related

Nginx Custom Header as HTTP request response

I'm facing a challenge and need some help, please.
It's quite simple: I need to set a custom header whose value is the response of an HTTP request to an internal app I have running on the same instance.
I have two applications running in Docker on one instance.
I need to set a custom header on APP 1, and the value of this header is the result of an API call to APP 2.
NGINX config for APP 1:
server {
    server_name example.com;
    location / {
        proxy_pass http://localhost:6987;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header user-code 0.0.0.0:6500?code=$geoip2Lite_data_city_name;
    }
}
APP 2 is a simple app that has an endpoint that returns a value according to the code query parameter.
This app is running in Docker on the same instance.
What I'm looking for is that all requests that come to APP 1 have the user-code header set to the actual response of the API call.
Example:
If a user accesses my app from Lisbon, a GET request to http://0.0.0.0:9982?code=Lisbon will be made; the response of this request is 236578552, so the user-code header will be 236578552.
Is it possible to do this in NGINX?
Thank you all in advance, Cheers!

Handling flask url_for behind nginx reverse proxy

I have a flask application using nginx for a reverse proxy/ssl termination, but I'm running into trouble when using url_for and redirect in flask.
nginx.conf entry:
location /flaskapp {
    proxy_pass http://myapp:8080/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
The idea is that a user navigates to
https://localhost:port/flaskapp/some/location/here
and that should be passed to flask as
http://localhost:8080/some/location/here
This works reasonably well when navigating to a defined route; however, if the route has redirect(url_for('another_page')), the browser is directed to
http://localhost:8080/another_page
And fails, when the URL I actually want to go to is:
https://localhost:port/flaskapp/another_page
I have tried several other answers for similar situations, but none have seemed to be doing exactly what I am doing here. I have tried using _external=True, setting app.config['APPLICATION_ROOT'] = '/flaskapp' and many iterations of different proxy_set_header commands in nginx.conf with no luck.
As an added complication, my flask application is using flask-login and CSRF cookies. When I tried setting APPLICATION_ROOT the application stopped considering the CSRF cookie set by flask-login valid, which I assume has something to do with origins.
So my question is, how do I make it so that when flask is returning a redirect() to the client, nginx understands that the URL it is given needs flaskapp written into it?
I managed to fix it with some changes.
Change 1. Adding /flaskapp to the routes in my flask application. This eliminated the need for URL-rewriting and simplified things greatly.
Change 2. nginx.conf changes. I added logic in the location block to redirect HTTP requests to HTTPS; new conf:
location /flaskapp {
    proxy_pass http://myapp:8080/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # New configs below
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    # Makes flask redirects use https, not http.
    proxy_redirect http://$http_host/ https://$http_host/;
}
While I didn't "solve" the issue of introducing conditional rewrites based on a known prefix, since I only need one prefix for this app it is an acceptable solution to bake it into the routes.
In your situation I think the correct thing would be to use werkzeug's ProxyFix middleware, and have your nginx proxy set the appropriate required headers (specifically X-Forwarded-Prefix).
https://werkzeug.palletsprojects.com/en/0.15.x/middleware/proxy_fix/#module-werkzeug.middleware.proxy_fix
This should make url_for work as you would expect.
Edit: Snippet from #Michael P's answer
from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)
# x_proto=1 / x_prefix=1 can be added to also trust X-Forwarded-Proto / X-Forwarded-Prefix.
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_host=1)
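For the nginx side of this setup, the proxy also has to send the headers ProxyFix is asked to trust; the sketch below assumes the /flaskapp prefix and myapp:8080 upstream from the question, and X-Forwarded-Proto / X-Forwarded-Prefix are only honoured if x_proto=1 / x_prefix=1 are passed to ProxyFix:
location /flaskapp {
    proxy_pass http://myapp:8080/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # Lets url_for build https:// URLs after TLS termination (needs x_proto=1).
    proxy_set_header X-Forwarded-Proto $scheme;
    # Tells the app it is mounted under /flaskapp (needs x_prefix=1).
    proxy_set_header X-Forwarded-Prefix /flaskapp;
}
With the prefix header in place, the /flaskapp part should no longer have to be baked into the Flask routes.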

Nginx reverse proxy root URI issue

I have an application running in Kubernetes with the following topology:
Some ingress controller --> nginx reverse proxy --> dynamically generated services.
I have set up the NGINX reverse proxy with the following test configuration:
location /mysite1/ {
    proxy_set_header Host $host;
    proxy_set_header Referer $http_referer;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto http;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Host $remote_addr;
    proxy_pass http://myservice1.default.svc:9000/;
}
So far everything works fine - when I go to my website http://example.com/mysite1/ I see what I expect from the myservice1 application hosted at http://myservice1.default.svc:9000/. However, the application myservice1 issues requests to various internal (internal meaning they are part of the same container) resources on /get_resourceX. When the myservice1 application tries to access these resources they will be accessed at http://example.com/get_resourceX/ and not at http://example.com/mysite1/get_resourceX as they should - and that is my problem.
What could work is to simply reverse proxy all the relevant resource names as well. However, then I would need to do the same for http://example.com/mysite2, http://example.com/mysite3 etc. which is impractical since these are generated dynamically.
Another possible solution is to check the HTTP Referer header and see whether the request originates from mysite1 - but that seems awfully hackish.
How can I easily have myservice1 requests issued to /get_resourceX served by itself? Is there a generic way to set the root path for the myservice1 application to myservice1?

Asp.Net Core Google authentication

My app runs on Google Compute Engine. Nginx used as a proxy server. Nginx was configured to use SSL. Below is the content of /etc/nginx/sites-available/default:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name mywebapp.com;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    include snippets/ssl-mywebapp.com.conf;
    include snippets/ssl-params.conf;
    root /home/me/MyWebApp/wwwroot;
    location /.well-known/ {
    }
    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
In Startup.cs I have:
app.UseGoogleAuthentication(new GoogleOptions()
{
    ClientId = Configuration["Authentication:Google:ClientId"],
    ClientSecret = Configuration["Authentication:Google:ClientSecret"],
});
Now in Google Cloud Platform I need to specify Authorized redirect URIs. If I enter the following, my web app works as expected:
http://mywebapp.com/signin-google
But, it won't work if https is used; browser displays the following error:
The redirect URI in the request, http://mywebapp.com/signin-google, does
not match the ones authorized for the OAuth client.
In this case, is it safe to use http as authorized redirect uri? What configuration do I need if I want it to be https?
This happens because your application, which is running behind a reverse proxy server, has no idea that the request originally came over HTTPS.
SSL/TLS Termination Proxy
The configuration of the reverse proxy described in the question is called an SSL/TLS termination reverse proxy. This means that the secure connection is established between the client and the proxy server. The proxy server decrypts the request and then forwards it to the application over plain HTTP.
The issue with this configuration is that the application behind it is not aware that the client sent the request over HTTPS. So when it needs to redirect to itself, it uses HttpContext.Request.Scheme, HttpContext.Request.Host and HttpContext.Request.Port to build a valid URL for the redirect.
X-Forwarded-* HTTP Headers
This is where the X-Forwarded-* headers come into play. To let the application know that the request originally came through a proxy server over HTTPS, we have to configure the proxy server to set the X-Forwarded-For and X-Forwarded-Proto HTTP headers.
location / {
    proxy_pass http://localhost:5000;
    proxy_http_version 1.1;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection keep-alive;
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
OK, now if we get back to the ASP.NET Core application and take a look at the incoming HTTP request, we will see both X-Forwarded-* headers set; however, the redirect URL still uses the HTTP scheme.
Forwarded Headers Middleware
Basically, this middleware overrides HttpContext.Request.Scheme and HttpContext.Connection.RemoteIpAddress with the values provided by the X-Forwarded-Proto and X-Forwarded-For headers, respectively. To make it happen, let's add it to the pipeline by adding the following lines somewhere near the beginning of the Startup.Configure() method.
var forwardedHeadersOptions = new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto,
    RequireHeaderSymmetry = false
};
forwardedHeadersOptions.KnownNetworks.Clear();
forwardedHeadersOptions.KnownProxies.Clear();
app.UseForwardedHeaders(forwardedHeadersOptions);
This should eventually make your application construct valid URLs with HTTPS scheme.
My Story
The code above looks different from what Microsoft suggests. If we take a look at the documentation, their code looks a bit shorter:
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});
However, this didn't work for me. Also, according to the comments under this issue, I'm not alone.
I have nginx set up as a reverse proxy for an ASP.NET Core application running in a Docker container. It became more complicated after I put everything behind an Amazon load balancer (ELB).
I followed advice from the documentation first, but it didn't work for me. I have got the following warning in my app:
Parameter count mismatch between X-Forwarded-For and X-Forwarded-Proto
Then I looked at my X-Forwarded-* headers and realized that they had different lengths. The X-Forwarded-For header contained 2 records (comma-separated IP addresses), while X-Forwarded-Proto contained only one record, https. This is how I came to set the RequireHeaderSymmetry property to false.
Well, I got rid of the 'Parameter count...' warning message, but immediately after that I faced another odd debug message:
Unknown proxy: 172.17.0.6:44624
After looking into the source code of ForwardedHeadersMiddleware, I finally figured out that I had to either clear both the KnownNetworks and KnownProxies collections of the ForwardedHeadersOptions or add my Docker network 172.17.0.1/16 to the list of known networks. Right after that, I finally got it working.
PS: For those who set up SSL/TLS termination on the load balancer (e.g. an Amazon load balancer, ELB): DON'T set the X-Forwarded-Proto header in the nginx configuration. This would override the correct https value that came from the load balancer with the http scheme, and the redirect URL would be wrong. I have not yet found how to simply append the scheme used by nginx to the header instead of overriding it.
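A pattern that is sometimes used in this situation (a sketch only, assuming the load balancer already sets X-Forwarded-Proto; not verified against the exact ELB setup described above) is to forward the value nginx itself received instead of overriding it with $scheme:
location / {
    proxy_pass http://localhost:5000;
    # Pass through whatever the load balancer already put into the header,
    # instead of replacing it with nginx's own (plain-http) $scheme.
    proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
}
If some requests can reach nginx directly (without the load balancer), a map block that falls back from $http_x_forwarded_proto to $scheme avoids sending an empty header.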
For Apache users, you need to add only one header:
RequestHeader set X-Forwarded-Proto "https"
First, make sure that mod_headers is enabled.

Vaadin, Nginx: unsaved data

See the image below (Vaadin 7, nginx). What could be wrong?
web.xml
sample config:
server {
    listen 80;
    server_name crm.komrus.com;
    root /home/deploy/apache-tomcat-7.0.57/webapps/komruscrm;
    proxy_cache one;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8080/komruscrm/;
    }
}
It seems (you don't provide enough info about your problem) that you are using nginx as a reverse proxy for Tomcat/JBoss/Jetty, and you are deploying a Vaadin application behind it.
As soon as you enter the application, a session expired message appears.
I had this problem 3 months ago. In my scenario nginx was 1.0 and Vaadin 7.0+. The issue is caused by the cookies. nginx must set or rewrite something in the cookies, but you must configure it manually in the nginx.conf file, otherwise you will get that error.
Sadly, with my nginx version I wasn't able to pass the cookies in the right way, so I wasn't able to deploy my application under that scenario.
After some struggles, I decided to use Apache's reverse proxy and never saw that issue again. I hope you can write a rule that passes the cookies in the right way.
EDIT: I remembered this post: How to rewrite the domain part of Set-Cookie in a nginx reverse proxy? This is the case!
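Following the linked post, on nginx versions that have it (proxy_cookie_path was added in 1.1.15, so not the 1.0 mentioned above), a sketch of such a rule for the config in the question, where the application is deployed under /komruscrm on Tomcat but exposed at / through the proxy:
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:8080/komruscrm/;
    # Rewrite the path of the Set-Cookie header so the session cookie issued
    # for /komruscrm is valid for the proxied / path; the exact prefix may need
    # adjusting to match what the servlet container actually emits.
    proxy_cookie_path /komruscrm /;
}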
