Nginx config issue - couldn't connect to S3 compatible storage from a NodeJS test program

This is my first question on Stack Overflow.
I have a requirement to provision an LB and a proxy layer in a DMZ so that clients can reach a backend S3 compatible storage to read buckets. I am using multiple instances of Nginx for this: one instance as the LB (node 1) and two instances (nodes 2, 3) as reverse proxies. The LB (node 1) listens on https 443, has a CA-signed cert, and is visible on the internet. Nodes 2 and 3 listen on http 80 and forward requests to the backend S3 compatible storage, which listens on https with self-signed certs.
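For reference, a minimal sketch of the layout just described; all hostnames, IPs, ports and certificate paths below are placeholders, not the actual config:

# Node 1 (LB): terminates TLS on 443 and balances across the two proxy nodes.
upstream proxy_nodes {
    server 10.0.0.2:80;   # node 2 (placeholder address)
    server 10.0.0.3:80;   # node 3 (placeholder address)
}

server {
    listen 443 ssl;
    server_name s3.example.com;
    ssl_certificate     /etc/nginx/certs/s3.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/s3.example.com.key;

    location / {
        proxy_pass http://proxy_nodes;
    }
}

# Nodes 2 and 3: plain HTTP in, HTTPS out to the S3 compatible backend
# that presents a self-signed certificate.
server {
    listen 80;

    location / {
        proxy_pass https://s3-backend.internal:9021;
        proxy_ssl_verify off;   # backend cert is self-signed
    }
}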
When I use a test NodeJS program from within the DMZ layer to connect directly to the S3 compatible storage, I can read and list buckets using the AWS client with an accessKeyId and secretAccessKey.
But when I use the same test NodeJS program from the internet, with the same accessKeyId and secretAccessKey, to connect to node 1 (and eventually reach the backend S3 compatible storage), I get the following error:
{"message":"The request signature we calculated does not match the signature you provided.
Check your AWS Secret Access Key and signing method.
For more information, see REST Authentication and SOAP Authentication for details.",
"code":"SignatureDoesNotMatch",
"region":null,
"time":"2018-12-18T12:34:28.313Z",
"requestId":"2899219037",
"statusCode":403,"retryable":false,
"retryDelay":14.04655267301651}
I have tried multiple approaches to understand and solve this. It looks like my Nginx config is not passing HTTP headers correctly. But I didn't explicitly configure anything to hide HTTP headers, and my understanding is that all headers pass through unless we explicitly block them.
Apart from failing to reach the S3 compatible backend storage, the calls do go through Nginx. I have even tested reaching another host (with a self-signed cert) instead of the S3 compatible storage, and that worked well.
Please suggest a solution, and let me know what additional info I should add to this question.
srinivas

Resolved.
In my case, what fixed the issue was explicitly setting the Host header:
location /something {
    ...
    proxy_set_header Host $http_host;
    ...
}
My understanding is that Host is used as part of signature generation/verification. By default Nginx replaces it with the name of the proxied upstream when forwarding the request, and setting it explicitly resolved the issue.
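For anyone hitting the same thing: the Host header is part of what AWS-style request signing covers, so if the proxy rewrites it, the backend computes a different signature. A hedged sketch of what the relevant proxy block can look like (the upstream name is illustrative, not taken from my actual config):

location / {
    proxy_pass http://s3_proxy_nodes;

    # Without this, nginx substitutes $proxy_host (the upstream name) as Host,
    # and the S3 backend calculates the signature over a different value.
    proxy_set_header Host $http_host;
}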

Related

Need Certificate chain (on the incoming interface) from Nginx

I am using a setup wherein a chain certificate (Root CA Cert -> Intermediate CA Cert -> Client Cert) is being sent to Nginx. I need to configure Nginx so that it forwards the entire certificate chain to the middleware. Right now, it is only sending the leaf certificate, i.e. the client certificate.
I found the following options in the Nginx documentation (http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_client_certificate):
1- $ssl_client_escaped_cert
2- $ssl_client_cert
None of the above returns the full certificate chain.
Is anyone aware if there is such an option available?
This seems to be impossible by design - see https://serverfault.com/questions/576965/nginx-proxy-pass-with-a-backend-requesting-client-certificates
The usage of $ssl_client_escaped_cert (as explained in https://clairekeum.wordpress.com/2018/12/05/passing-client-cert-through-nginx-to-the-backend/) seems to be your only option.
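As a hedged illustration of that approach (the header name, certificate paths and middleware address are arbitrary placeholders), the escaped leaf certificate can be forwarded to the backend in a request header:

server {
    listen 443 ssl;
    ssl_certificate        /etc/nginx/certs/server.crt;
    ssl_certificate_key    /etc/nginx/certs/server.key;
    ssl_client_certificate /etc/nginx/certs/ca-chain.pem;  # issuers to verify clients against
    ssl_verify_client      on;
    ssl_verify_depth       2;

    location / {
        proxy_pass http://middleware:8080;
        # nginx only exposes the leaf certificate, not the full chain.
        proxy_set_header X-Client-Cert $ssl_client_escaped_cert;
    }
}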
This may not be a complete answer, but I thought I'd post some resources that may give you a couple of ideas.
If you want the client cert details downstream, then one option is to avoid terminating Mutual TLS in nginx by using the stream module. Here is an example:
Mutual TLS Secured API Blog Post
NGINX Config
In this setup there are 2 Mutual TLS connections being routed via nginx:
To authenticate with an Authorization Server - where Mutual TLS is not handled by nginx
To call an API with a certificate bound access token - where nginx terminates TLS
Note that this uses a Lua plugin and the $ssl_client_raw_cert variable to do the extra work of calculating a SHA-256 thumbprint, which NGINX itself does not support.
Generally, though, it makes sense to externalise Mutual TLS plumbing from application-level components, as in the above example. E.g. you can forward $ssl_client_escaped_cert to your middleware, but perhaps nginx should do the more detailed work of checking issuers.
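As a hedged sketch of the stream-module idea (addresses and ports are placeholders), TLS can be passed through untouched so the backend negotiates Mutual TLS itself and sees the client certificate directly:

# Top-level stream block (outside http {}): TLS is not terminated here.
stream {
    upstream mtls_backend {
        server 10.0.0.10:8443;   # placeholder backend address
    }

    server {
        listen 443;
        proxy_pass mtls_backend;
    }
}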

How to configure nginx to only allow requests from cloudfront client?

I have a server behind nginx, and I have a frontend distributed on AWS CloudFront using AWS Amplify. I'd like requests not coming from my client to be denied at the reverse proxy level. If you think I should do this at the app level instead, please let me know.
What I've tried so far is to allow all the IPs of AWS's CloudFront ranges (https://ip-ranges.amazonaws.com/ip-ranges.json) and deny all after that. However, my requests from the correct client get blocked.
My other alternative is to look up the domain's IP for every request and check against that - but I'd rather not do a DNS lookup every time.
I can also include some kind of token with every request, but come on - there's gotta be some easier way to get this done.
Any ideas?
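Not a full answer, but a hedged sketch of the "token with every request" idea mentioned above: CloudFront can be configured to add a custom origin header, and nginx can reject anything arriving without it. The header name, secret value, certificate paths and upstream below are placeholders.

server {
    listen 443 ssl;
    server_name api.example.com;
    ssl_certificate     /etc/nginx/certs/api.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/api.example.com.key;

    # Reject requests that lack the shared secret the CDN injects.
    if ($http_x_origin_secret != "change-me-long-random-value") {
        return 403;
    }

    location / {
        proxy_pass http://app_backend;
    }
}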

How to use Python requests to connect to a server through proxy when both requires different client certificate

I want to connect to an https server using the Python requests library through a proxy. The code roughly looks like:
response = requests.get(SERVER_ENDPOINT, proxies=PROXIES, cert=??)
My problem is that both the server and the proxy require client authentication, and unfortunately different CAs are used to authenticate against the server and the proxy. Is there a way to pass two CAs when making a request? The documentation doesn't seem to be very clear on this scenario.
Any help is greatly appreciated:)
Method Tried:
I tried the method suggested in another link, Python requests - how to add multiple own certificates, and bundled the certs and keys into separate pem files using the code below:
response = requests.get(SERVER_ENDPOINT, proxies=PROXIES, cert=(CERT_BUNDLE, KEY_BUNDLE))
It seems that only the first cert and key are used, so I am able to pass client auth at the proxy server but fail auth at the destination server.

Forward HTTPS client ip from Google Container Engine

I'm running an nginx service in a Docker container on Google Container Engine which forwards specific domain names to other services, like the API, the frontend, etc. I have a simple cluster for that with configured services. The nginx Service is of type LoadBalancer.
The REMOTE_ADDR environment variable always contains an internal address from the Kubernetes cluster. I also looked for HTTP_X_FORWARDED_FOR, but it's missing from the request headers. Is it possible to configure the service to preserve the external client IP in the requests?
With the current implementation of L3 balancing (as of Kubernetes 1.4) it isn't possible to get the source IP address for a connection to your service.
It sounds like your use case might be well served by using an Ingress object (or by manually creating an HTTP/S load balancer), which will put the source IP address into the X-Forwarded-For HTTP header for easy retrieval by your backends.
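If nginx is the backend behind such a load balancer, a hedged sketch using the realip module can restore the client address from that header (the trusted ranges and upstream below are examples and must match the load balancer actually in use):

server {
    listen 80;

    # Trust X-Forwarded-For only when it arrives from the load balancer ranges.
    set_real_ip_from 130.211.0.0/22;   # example range, adjust to your LB
    set_real_ip_from 35.191.0.0/16;    # example range, adjust to your LB
    real_ip_header   X-Forwarded-For;
    real_ip_recursive on;

    location / {
        proxy_pass http://app_backend;   # placeholder upstream
    }
}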

Real life usage of the X-Forwarded-Host header?

I've found some interesting reading on the X-Forwarded-* headers, including the Reverse Proxy Request Headers section in the Apache documentation, as well as the Wikipedia article on X-Forwarded-For.
I understand that:
X-Forwarded-For gives the address of the client which connected to the proxy
X-Forwarded-Port gives the port the client connected to on the proxy (e.g. 80 or 443)
X-Forwarded-Proto gives the protocol the client used to connect to the proxy (http or https)
X-Forwarded-Host gives the content of the Host header the client sent to the proxy.
These all make sense.
However, I still can't figure out a real life use case of X-Forwarded-Host. I understand the need to repeat the connection on a different port or using a different scheme, but why would a proxy server ever change the Host header when repeating the request to the target server?
If you use a front-end service like Apigee as the front end to your APIs, you will need something like X-FORWARDED-HOST to understand what hostname was used to connect to the API. Apigee gets configured with whatever your backend DNS is, so nginx and your app stack only see the Host header as your backend DNS name, not the hostname that was called in the first place.
This is the scenario I worked on today:
Users access a certain application server using the "https://neaturl.company.com" URL, which points to a reverse proxy. The proxy then terminates SSL and forwards users' requests to the actual application server, which has the URL "http://192.168.1.1:5555". The problem is: when the application server needed to redirect the user to another page on the same server using an absolute path, it used the latter URL, which users don't have access to. Using X-Forwarded-Host (+ X-Forwarded-Proto and X-Forwarded-Port) allowed our proxy to tell the application server which URL the user used originally, and thus the server started to generate correct absolute paths in its responses.
In this case there was no option to stop the application server from generating absolute URLs, nor to configure a "public url" for it manually.
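For illustration, a hedged sketch of such a proxy config (assuming the reverse proxy is nginx; the certificate paths are placeholders):

server {
    listen 443 ssl;
    server_name neaturl.company.com;
    ssl_certificate     /etc/nginx/certs/neaturl.company.com.crt;
    ssl_certificate_key /etc/nginx/certs/neaturl.company.com.key;

    location / {
        proxy_pass http://192.168.1.1:5555;
        # Tell the application server which URL the user actually used,
        # so it can generate correct absolute URLs in its responses.
        proxy_set_header X-Forwarded-Host  $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port  $server_port;
    }
}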
I can tell you a real life issue, I had an issue using an IBM portal.
In my case the problem was that the IBM portal has a rest service which retrieves an url for a resource, something like:
{"url":"http://internal.host.name/path"}
What happened?
Simple: when you enter from the intranet everything works fine because internal.host.name resolves but... when the user enters from the internet, the proxy is not able to resolve the host name and the portal crashes.
The fix for the IBM portal was to read the X-FORWARDED-HOST header and then change the response to something like:
{"url":"http://internet.host.name/path"}
See that I put internet and not internal in the second response.
As for the need for 'x-forwarded-host', I can think of a virtual hosting scenario where there are several internal hosts (on an internal network) and a reverse proxy sitting between those hosts and the internet. If the requested host is part of the internal network, the requested hostname resolves to the reverse proxy IP and the web browser sends the request to the reverse proxy. The reverse proxy finds the appropriate internal host and forwards the request sent by the client to this host. In doing so, the reverse proxy changes the Host field to match the internal host and sets x-forwarded-host to the actual host requested by the client. More details on reverse proxies can be found on this Wikipedia page: http://en.wikipedia.org/wiki/Reverse_proxy.
Check this post for details on x-forwarded-for header and a simple demo python script that shows how a web-server can detect the use of a proxy server: x-forwarded-for explained
One example could be a proxy that blocks certain hosts and redirects them to an external block page. In fact, I’m almost certain my school filter does this…
(And the reason they might not just pass on the original Host as Host is because some servers [Nginx?] reject any traffic to the wrong Host.)
X-Forwarded-Host just saved my life. CDNs (or reverse proxies, if you'd like to go down to "trees") determine which origin to use by the Host header a user comes to them with. Thus, a CDN can't use the same Host header to contact the origin - otherwise, the CDN would loop back to itself rather than going to the origin. So the CDN uses either an IP address or some dummy FQDN as the Host header when fetching content from the origin. Now, the origin may wish to know what the Host header (aka website name) was that the content was asked for. In my case, one origin served 2 websites.
Another scenario: you license your app to a host URL, then you want to load balance across n > 1 servers.