Allow cross-origin on the application side (no control of the API) - nginx

I have an application which should be able to read in data from any data source, meaning any API from any domain.
How do you get around the cross-origin problem when you don't have any control over the API or even the domain it is coming from?
I know that you could simulate the same domain by adding a
location /data/ {
    proxy_pass http://exampleAPIdomain.com/data/;
}
block to allow for a specific API domain (here: exampleAPIdomain.com), but in my case I want to be open for any domain.
Is that even possible?

Yes, that is possible by using a variable in the proxy_pass directive:
proxy_pass $somevariable$request_uri;
For example, you can set the actual host via a request header; then the directive would be:
proxy_pass $http_someheader$request_uri;
Security note: If you expose this to the internet without some form of authorization, then everybody can use your proxy to proxy anything.
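For illustration, a minimal sketch of a server block built around that idea (the header name X-Proxy-Target and the resolver address are assumptions, not part of the answer); note that as soon as proxy_pass contains a variable, nginx also needs a resolver directive so it can look up the upstream host at request time:
server {
    listen 8080;

    # Required when proxy_pass uses variables: hostnames are resolved at runtime.
    resolver 1.1.1.1;

    location / {
        # The client names the upstream, e.g. X-Proxy-Target: http://exampleAPIdomain.com
        proxy_pass $http_x_proxy_target$request_uri;

        # Let browser scripts on any origin read the proxied response.
        add_header Access-Control-Allow-Origin * always;
    }
}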

Related

On Demand TLS and Reverse Proxy Support for Custom Domains

I ran into a situation today. Please share your expertise 🙏
I have a project (my-app.com) and one of the features is to generate a status page consisting of different endpoints.
Current Workflow
User logs into the system.
User creates a status page for one of his sites (e.g. Google) and adds different endpoints and components to be included on that page.
System generates a link for the given status page,
for example: my-app.com/status-page/google
But the user may want to see this page on his own custom domain,
for example: status.google.com
Since this is a custom domain, we need on-demand TLS functionality. For this feature I used Caddy, and it is working fine. Caddy is running on our subdomain status.myserver.com, and the user's custom domain status.google.com has a CNAME to our subdomain status.myserver.com.
Besides on-demand TLS, I also need to do a reverse proxy as shown below.
For example: status.google.com ->(CNAME)-> status.myserver.com ->(REVERSE_PROXY)-> my-app.com/status-page/google
But Caddy only supports the protocol, host, and port format for reverse-proxy upstreams, such as my-app.com, while my requirement is to reverse-proxy to a specific page, my-app.com/status-page/google. How can I achieve this? Is there a better alternative to Caddy, or a workaround with Caddy?
You're right: since you can't use a path in a reverse_proxy upstream URL, you'd have to rewrite the request to include the path first, before initiating the reverse proxy.
Additionally, upstream addresses cannot contain paths or query strings, as that would imply simultaneously rewriting the request while proxying it, which behavior is not defined or supported. You may use the rewrite directive should you need this.
So you should be able to use an internal Caddy rewrite to add the /status-page/google path to every request. Then you can simply use my-app.com as your Caddy reverse_proxy upstream. That could look like this:
https:// {
    rewrite * /status-page/google{path}?{query}
    reverse_proxy http://my-app.com
}
You can find out more about all the possible Caddy reverse_proxy upstream addresses here: https://caddyserver.com/docs/caddyfile/directives/reverse_proxy#upstream-addresses
However, since you probably can't hard-code the name of the status page (/status-page/google) in your Caddyfile, you could set up a script (e.g. at /status-page) which looks at the requested URL, looks up the domain (e.g. status.google.com) in your database, and automatically outputs the correct status page.
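Since the status page name cannot be hard-coded, a hedged sketch of that generic setup could look like this (the /status-page/lookup route is an assumption; Caddy's reverse_proxy forwards the original Host header by default, so the app can see which custom domain, e.g. status.google.com, was requested and render the matching page):
https:// {
    # Your existing on-demand TLS configuration stays as it is.
    # One generic route for every custom domain; the app resolves the page
    # from the Host header it receives.
    rewrite * /status-page/lookup{path}?{query}
    reverse_proxy http://my-app.com
}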

Edit a header value in nginx

Background
So I've got a server running a Tomcat application hidden behind an Apache proxy. The proxy provides a more user-friendly URL as well as SSL encryption, with automatic redirects so that the app is only accessible over HTTPS.
I'm busy migrating this to an nginx proxy.
One of the issues I've had is that upon login, my app sends back a "LocationAfterLogon" header in the HTTP response in the form of
http://192.168.x.x:8080/myapp/index.jsp.
The returned IP address belongs to the proxied server and is not visible on the internet, so the browser gets a connection error when trying to navigate to it.
As a workaround, I've used these nginx directives:
proxy_hide_header: to hide the LocationAfterLogon header coming back from the proxied server
add_header: to add a new LocationAfterLogon URL.
So my config looks as follows:
#header for location after logon of demo app
add_header LocationAfterLogon http://example.com/demo/index.jsp;
#hide the real LocationAfterLogon
proxy_hide_header LocationAfterLogon;
The Problem
I need to be able to do a regex replace or similar on LocationAfterLogon, because it won't always point to index.jsp; it depends on which URL was intercepted by the login page.
I am aware that I could also change the Tomcat app to send back a relative URL instead, but I'd like to do it all in the nginx config.
I've also read about nginx's more_set_headers but haven't tried it yet. Does it allow me to edit headers?
Apache has the Header edit directive, which I was using previously, so I'm looking for something like that.
TL;DR
Is it possible to edit a header value using a regex replace or similar in nginx?
You can use the map directive to rewrite your header:
map $upstream_http_locationafterlogon $new_location {
    ~regexp new_value;
}

proxy_hide_header LocationAfterLogon;
add_header LocationAfterLogon $new_location;
See the documentation: http://nginx.org/en/docs/http/ngx_http_map_module.html
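A more concrete sketch for the case described in the question (the regex, the upstream address 192.168.1.10:8080, and the public name example.com are assumptions):
map $upstream_http_locationafterlogon $new_location {
    # Keep the path, swap the internal host for the public one.
    "~^http://192\.168\.\d+\.\d+:8080(?<path>/.*)$" "https://example.com$path";
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://192.168.1.10:8080;
        # Hide the upstream header and re-add the rewritten value;
        # add_header only emits the header when $new_location is non-empty.
        proxy_hide_header LocationAfterLogon;
        add_header LocationAfterLogon $new_location;
    }
}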

How to enable simple CORS on nginx

I installed nginx on my laptop. My web server contains DASH streaming on-demand using the dash.js player, which is hosted only on localhost. I want to restrict things so that only the DASH dataset from localhost can be used in that player. Can I use CORS for this purpose? I tried adding
location / {
    add_header 'Access-Control-Allow-Origin' 'http://localhost';
}
but any DASH dataset can still be used in the player hosted on localhost. How do I enable simple CORS features on nginx? Is my understanding of CORS wrong?
Thanks
I want to restrict only DASH dataset from localhost that can be used in that player. Can I use CORS for my purpose?
Not really. CORS is used for getting at resources cross-domain. If a player could natively play DASH (which none of the browsers currently do), then the content would play on any page, CORS support or not. The way DASH players work in-browser today is by loading the resources via XHR requests and feeding the data to the Media Source Extensions API. To do this, the CORS headers are needed.
Cross-origin request blocking isn't really meant to prevent access to a resource. It's to prevent scripts on one page from accessing resources belonging to another page, effectively impersonating a user. Access-Control-Allow-Origin headers enable other pages to access those resources by effectively saying that the resource queried is safe for use.
If you want to actually block access to something, you should use allow/deny. http://nginx.org/en/docs/http/ngx_http_access_module.html
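For example, a minimal sketch of that approach (the /dash/ path is an assumption about where the DASH segments live) allows only the local machine to fetch the dataset:
location /dash/ {
    # Only requests from the local machine may fetch the DASH segments.
    allow 127.0.0.1;
    allow ::1;
    deny all;
}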

Downsides of 'Access-Control-Allow-Origin: *'?

I have a website with a separate subdomain for static files. I found out that I need to set the Access-Control-Allow-Origin header in order for certain AJAX features to work, specifically fonts. I want to be able to access the static subdomain from localhost for testing, as well as from the www subdomain. The simple solution seems to be Access-Control-Allow-Origin: *. My server uses nginx.
What are the main reasons that you might not want to use a wildcard for Access-Control-Allow-Origin in your response header?
You might not want to use a wildcard when, for example:
Your web app and, say, its AJAX backend API run on different domains, or just on different ports, and you do not want to expose the backend API to the whole Internet; then you do not send *. For example, if your web app is on http://www.example.com and the backend API on http://api.example.com, the API would respond with Access-Control-Allow-Origin: http://www.example.com.
If the API wants the client to send cookies, it must not send Access-Control-Allow-Origin: *; the value must be the origin of the actual request (together with Access-Control-Allow-Credentials: true).
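A minimal nginx sketch of the first case (www.example.com, localhost, and api.example.com are illustrative assumptions): only whitelisted origins are echoed back, and everything else gets no CORS header at all.
map $http_origin $cors_origin {
    default                   "";
    "http://www.example.com"  $http_origin;
    "http://localhost"        $http_origin;
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        # An empty $cors_origin means the header is simply not added.
        add_header Access-Control-Allow-Origin $cors_origin;
        add_header Access-Control-Allow-Credentials true;
    }
}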
For testing, adding an entry to the /etc/hosts file mapping 127.0.0.1 (or the server's public IP) to dev.mydomain.com is a decent workaround.
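For example, the entry on the development box could look like this (use the server's public IP instead of 127.0.0.1 when testing against the remote machine):
# /etc/hosts
127.0.0.1    dev.mydomain.com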
Another way can be to have another domain served by nginx itself, like dev.mydomain.com, pointing to the same (or a test) instance of the backend servers and static web root, with some security measures like:
satisfy all;
allow <YOUR-CIDR/IP>;
deny all;
Clarification on Access-Control-Allow-Origin: *
This policy protects the users of your website from being scammed or hijacked while visiting other, malicious websites in a modern browser that respects it (all known browsers should).
It does not protect the web service from scraper scripts that access your static assets and APIs at high speed, run brute-force attacks, bulk-download content, cause load, etc.
P.S.: For development, you can consider using a free, low-footprint, private peer-to-peer VPN-like network between your development box and the server: https://tailscale.com/
In my opinion, the main downside is that you could have other websites consuming your API without your explicit permission.
Imagine you have an e-commerce site: another website could do all the transactions using their own look and feel but backed by you. In the end it might even seem good for you, because you still get the money, but your brand will lose its "recognition".
Another problem would be if that website changed the payload sent to your backend, doing things like changing the delivery address.
The idea behind this is simply to not authorize unknown websites to consume your API and show its results to their users.
You could use the hosts file to map 127.0.0.1 to your domain name, "dev.mydomain.com", since you do not want to use Access-Control-Allow-Origin: *.

Nginx: How to test if a cookie is set or not without using 'if'?

I am currently using the following nginx configuration to test my app:
location / {
    # see if the 'id' cookie is set, if yes, pass to that server.
    if ($cookie_id){
        proxy_pass http://${cookie_id}/$request_uri;
        break;
    }
    # if the cookie isn't set, then send him to somewhere else
    proxy_pass http://localhost:99/index.php/setUserCookie;
}
But they say "IfIsEvil". Can anyone show me how to do the same job without using "if"?
And also, is my usage of "if" buggy?
There are two reasons why "if is evil" as far as nginx is concerned. The first is that many howtos found on the internet directly translate htaccess rewrite rules into a series of ifs, when separate servers or locations would be a better choice. The second is that nginx's if statement doesn't behave the way most people expect it to: it acts more like a nested location, and some settings don't inherit as you would expect. Its behavior is explained here.
That said, checking things like cookies must be done with ifs. Just be sure you read and understand how ifs work (especially regarding directive inheritance) and you should be ok.
You may want to rethink blindly proxying to whatever host is set in the cookie. Perhaps combine the cookie with a map to limit the backends.
EDIT: If you use names instead of IP addresses in the id cookie, you'll also need a resolver defined so nginx can look up the address of the backend. Also, your default proxy_pass will append the request URI onto the end of the setUserCookie URL. If you want to proxy to exactly that URL, replace that default proxy_pass with:
rewrite ^ /index.php/setUserCookie break;
proxy_pass http://localhost:99;
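Combining the map suggestion with the edit's default behaviour, a hedged sketch could look like this (the cookie values server-a/server-b and the backend addresses are assumptions; because the map values are plain IP:port pairs, no resolver directive is needed):
# Whitelist of backends: unknown or missing cookies never reach an arbitrary host.
map $cookie_id $backend {
    default     "127.0.0.1:99";
    "server-a"  "10.0.0.11:8080";
    "server-b"  "10.0.0.12:8080";
}

# Matching URI: known cookies keep the original request,
# everything else goes to the cookie-setting page.
map $cookie_id $target_uri {
    default     "/index.php/setUserCookie";
    "server-a"  $request_uri;
    "server-b"  $request_uri;
}

server {
    listen 80;

    location / {
        proxy_pass http://$backend$target_uri;
    }
}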
