Nginx Stripping POST body on proxy_pass

I have a server running behind a firewall with a single external IP, so requests are proxied to it, by domain, via an Nginx box.
When I run cURL behind the firewall, everything goes to plan:
HTTP/1.1 200 OK
The cURL is:
curl -H "Content-Type: application/json" -X POST --data @test.json 111.111.111.111/endpoint/ -i
As soon as I run this in Postman/Hurl.it/whatever from outside the network, I get 400 errors. The code throws a 400 when it is missing the POST body (JSON). Echoing this out shows that no JSON is being received.
The relevant Nginx configuration is thus:
server {
    listen 80;
    server_name domain.co;

    location / {
        proxy_pass http://111.111.111.111/;
        proxy_set_header Host $host;
    }
}
The domain does sit behind CloudFlare, and I've switched it to DNS only and tried that - I'd be very surprised if that was the issue.
I've had a look at other solutions and tried a few things out, but I'm not really sure what I'm doing wrong here, unless I fundamentally misunderstand how proxy_pass works?
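One way to narrow this down is to have nginx log the body it actually receives from outside before proxying. A minimal sketch, assuming you can edit the proxy box's config (note that the log_format line belongs at the http level, and $request_body is only populated once nginx has read the body, which it does when proxying):

```nginx
# http-level: a log format that captures the request body
log_format postdata '$remote_addr "$request" body: "$request_body"';

server {
    listen 80;
    server_name domain.co;

    location / {
        # Log each proxied request together with its body
        access_log /var/log/nginx/postdata.log postdata;
        proxy_pass http://111.111.111.111/;
        proxy_set_header Host $host;
    }
}
```

If external requests show up in postdata.log with an empty body, the body is being lost before it reaches nginx (e.g. at CloudFlare); if the body is there, the problem is on the hop to the upstream.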

Related

Nginx server block not functioning as expected

I'm using this config file with nginx:
server {
    listen 80;
    server_name harrybilney.co.uk;

    location / {
        proxy_pass http://localhost:8080;
    }
}

server {
    listen 80;
    server_name kyra-mcd.co.uk;

    location / {
        proxy_pass http://localhost:8080;
    }
}
Which is stored in /etc/nginx/sites-available. The server block for kyra-mcd.co.uk works exactly as expected, but the block for harrybilney.co.uk does not: my browser cannot find the server for harrybilney.co.uk.
Both domains are hosted with GoDaddy and have the exact same DNS settings pointing towards my static IP (IPv4 and IPv6 with A and AAAA records).
Can anyone explain why I'm having this issue? I've tried changing the config but have had no luck. I understand this is a very basic config file for nginx, but for now I'm just trying to get both domains working on my one static IP before I add anything complex.
Having both server blocks in a single file is no problem!
Here is a default.conf file:
server {
    listen 80;
    server_name harrybilney.co.uk;

    location / {
        return 200 "$host\n";
    }
}

server {
    listen 80;
    server_name kyra-mcd.co.uk;

    location / {
        return 200 "Host should match kyra-mcd.co.uk = $host\n";
    }
}
Test and reload your config by issuing sudo nginx -t && sudo nginx -s reload
The curl test:
$ curl -H "Host: kyra-mcd.co.uk" localhost
Host should match kyra-mcd.co.uk = kyra-mcd.co.uk
$ curl -H "Host: harrybilney.co.uk" localhost
harrybilney.co.uk
As you can see, both servers are in a single file, and server_name takes care of selecting the correct server block based on the Host header.
Check your DNS one more time. I looked it up:
kyra-mcd.co.uk. 600 IN A 90.255.228.109
harrybilney.co.uk. 3600 IN A 90.255.228.109
Looks good to me as well. So the traffic should hit the server.
So your configuration looks good to me. Make sure everything is loaded by issuing sudo nginx -T.
curl is working on my end, so it looks like the problem is related to DNS on your end. Can you confirm curl is working from your end as well?

403 response when trying to redirect using nginx to API Gateway

I have the following API: https://kdhdh64g.execute-api.us-east-1.amazonaws.com/dev/user/${user-id} which proxies to a Lambda function.
When the user hits /user/1234, the function checks whether 1234 exists and returns the info for that user, or a redirect to /users.
What I want is to create a redirect with nginx. For SEO, I want a simple 302: return 302 the-url. If someone goes to mySite.com it should redirect to https://kdhdh64g.execute-api.us-east-1.amazonaws.com/dev
No matter what I do, I always receive a 403 with the following:
x-amzn-errortype: MissingAuthenticationTokenException
x-amz-apigw-id: QrFd6GByoJHGf1g=
x-cache: Error from cloudfront
via: 1.1 dfg35721fhfsgdv36vs52fa785f5g.cloudfront.net (CloudFront)
I would appreciate any help.
If you are using a reverse proxy set up in nginx, add the line below to the config file and restart or reload the nginx configuration.
proxy_set_header Host $proxy_host;
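For context, a sketch of where that line would sit, using the API Gateway URL from the question (the server_name here is a placeholder): $proxy_host makes nginx send the upstream's own hostname in the Host header, which API Gateway needs in order to route the request, and proxy_ssl_server_name on does the same for the TLS SNI extension.

```nginx
server {
    listen 80;
    server_name mySite.com;  # placeholder

    location / {
        proxy_pass https://kdhdh64g.execute-api.us-east-1.amazonaws.com/dev;
        # Send the upstream's hostname, not the client's, in Host:
        proxy_set_header Host $proxy_host;
        # ...and in the TLS SNI extension:
        proxy_ssl_server_name on;
    }
}
```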
I ran into the same issue trying to run an Nginx proxy in front of an API Gateway that triggers a Lambda function on AWS. When I read the error logs on Nginx, I noticed it had to do with the SSL version Nginx was using to connect to API Gateway; the error was the following:
*1 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream
I managed to fix it by adding this line:
proxy_ssl_protocols TLSv1.3;
I attach the complete Nginx configuration in case anyone wants to build a static IP proxy on Nginx that redirects traffic towards a Lambda function:
server {
    listen 443 ssl;
    server_name $yourservername;

    location / {
        proxy_pass https://$lambdafunctionaddress;
        proxy_ssl_server_name on;
        proxy_redirect off;
        proxy_ssl_protocols TLSv1.3;
    }

    ssl_certificate /home/ubuntu/.ssl/ca-chain.crt;
    ssl_certificate_key /home/ubuntu/.ssl/server.key;
}
Also, it is important that all required information is included in the request:
curl -X POST https://$yourservername/env/functionname -H "Content-Type: application/json" -H "x-api-key: <yourapikey>" -d $payload

How to ignore Content-Length in Nginx?

I'm facing a problem: if a GET request arrives with a non-zero Content-Length header but no body, nginx will not proxy the request. How to reproduce it:
nginx version: nginx/1.10.3 (Ubuntu)
# conf.d/main.conf
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:5060;
    }
}
Listen on port 5060 via netcat: nc -l -p 5060
Then make the curl request:
curl -vvv -X GET http://example.com/ -H 'Content-Length: 1'
This request will be closed after the default proxy_send_timeout of 60s.
As I understand it, nginx sees the Content-Length and waits for the body before transmitting the request to the proxy. How can I resolve this without changing the request header?
Please note: I tried the same thing with HAProxy, and it works despite the non-zero Content-Length and empty body.
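A sketch of one avenue to try (an untested assumption, not a confirmed fix): disabling request buffering, so nginx forwards the headers to the upstream immediately instead of waiting to buffer the whole body first. Note that this shifts the wait for the missing byte to the upstream rather than eliminating it, so whether it helps depends on how the backend handles an incomplete body:

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        # Stream the request to the upstream as it arrives instead of
        # buffering the full body first (needs HTTP/1.1 to the upstream).
        proxy_http_version 1.1;
        proxy_request_buffering off;
        proxy_pass http://127.0.0.1:5060;
    }
}
```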

Nginx Reverse Proxy WebSocket Timeout

I'm using java-websocket for my websocket needs, inside a wowza application, and using nginx for ssl, proxying the requests to java.
The problem is that the connection seems to be cut after exactly 1 hour, server-side. The client-side doesn't even know that it was disconnected for quite some time. I don't want to just adjust the timeout on nginx, I want to understand why the connection is being terminated, as the socket is functioning as usual until it isn't.
EDIT:
Forgot to post the configuration:
location /websocket/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    include conf.d/proxy_websocket;
    proxy_connect_timeout 1d;
    proxy_send_timeout 1d;
    proxy_read_timeout 1d;
}
And that included config:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://127.0.0.1:1938/;
Nginx/1.12.2
CentOS Linux release 7.5.1804 (Core)
Java WebSocket 1.3.8 (GitHub)
The timeout could be coming from the client, nginx, or the back-end. When you say that it is being cut "server side" I take that to mean that you have demonstrated that it is not the client. Your nginx configuration looks like it shouldn't timeout for 1 day, so that leaves only the back-end.
Test the back-end directly
My first suggestion is that you try connecting directly to the back-end and confirm that the problem still occurs (taking nginx out of the picture for troubleshooting purposes). Note that you can do this with command line utilities like curl, if using a browser is not practical. Here is an example test command:
time curl --trace-ascii curl-dump.txt -i -N \
-H "Host: example.com" \
-H "Connection: Upgrade" \
-H "Upgrade: websocket" \
-H "Sec-WebSocket-Version: 13" \
-H "Sec-WebSocket-Key: BOGUS+KEY+HERE+IS+FINE==" \
http://127.0.0.1:8080
In my (working) case, running the above example stayed open indefinitely (I stopped with Ctrl-C manually) since neither curl nor my server was implementing a timeout. However, when I changed this to go through nginx as a proxy (with default timeout of 1 minute) as shown below I saw a 504 response from nginx after almost exactly 1 minute.
time curl -i -N --insecure \
-H "Host: example.com" \
https://127.0.0.1:443/proxied-path
HTTP/1.1 504 Gateway Time-out
Server: nginx/1.14.2
Date: Thu, 19 Sep 2019 21:37:47 GMT
Content-Type: text/html
Content-Length: 183
Connection: keep-alive
<html>
<head><title>504 Gateway Time-out</title></head>
<body bgcolor="white">
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx/1.14.2</center>
</body>
</html>
real 1m0.207s
user 0m0.048s
sys 0m0.042s
Other ideas
Someone mentioned trying proxy_ignore_client_abort but that shouldn't make any difference unless the client is closing the connection. Besides, although that might keep the inner connection open I don't think it is able to keep the end-to-end stream intact.
You may want to try proxy_socket_keepalive, though that requires nginx >= 1.15.6.
Finally, there's a note in the WebSocket proxying doc that hints at a good solution:
Alternatively, the proxied server can be configured to periodically send WebSocket ping frames to reset the timeout and check if the connection is still alive.
If you have control over the back-end and want connections to stay open indefinitely, periodically sending "ping" frames to the client should prevent the connection from being closed due to inactivity (making proxy_read_timeout unnecessary), no matter how long it is open or how many middle-boxes are involved. If the client is a web browser, no change is needed on that side, since ping/pong handling is implemented as part of the spec.
Most likely it's because your configuration for the websocket proxy needs tweaking a little, but since you asked:
There are some challenges that a reverse proxy server faces in
supporting WebSocket. One is that WebSocket is a hop‑by‑hop protocol,
so when a proxy server intercepts an Upgrade request from a client it
needs to send its own Upgrade request to the backend server, including
the appropriate headers. Also, since WebSocket connections are long
lived, as opposed to the typical short‑lived connections used by HTTP,
the reverse proxy needs to allow these connections to remain open,
rather than closing them because they seem to be idle.
Within the location directive that handles your websocket proxying you need to include the headers; this is the example NGINX gives:
location /wsapp/ {
    proxy_pass http://wsbackend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}
This should now work because:
NGINX supports WebSocket by allowing a tunnel to be set up between a
client and a backend server. For NGINX to send the Upgrade request
from the client to the backend server, the Upgrade and Connection
headers must be set explicitly, as in this example
I'd also recommend you have a look at the Nginx Nchan module which adds websocket functionality directly into Nginx. Works well.

Why doesn't NGinx pass the HOST?

I need to use NGinx as a proxy to another HTTP proxy, and it doesn't work because it only sends the path of the original URL, not the HOST.
If I perform the request with curl, it works, and the dump is:
curl --proxy http://localhost:81 http://sample.com/sample
http://sample.com/some-path
{ host: 'sample.com' }
If I perform the request through NGinx with the following config, it doesn't work, and the dump is (the domain in the path is missing):
upstream proxies {
    server localhost:81;
}

location / {
    proxy_set_header Host $host;
    proxy_pass http://proxies;
}
/some-path
{ host: 'sample.com' }
How to make NGinx to pass the whole path?
The solution is to add another proxy in between, for example DeleGate. NGinx won't pass the HOST properly, but DeleGate fixes that.
Your Browser or App -> (NGinx -> DeleGate) -> whatever other proxy or app...
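A minimal sketch of the nginx side of that chain (the DeleGate port here is a hypothetical placeholder; rewriting the request line into an absolute URL for the downstream HTTP proxy happens inside DeleGate, not in nginx):

```nginx
# Browser/App -> nginx -> DeleGate -> upstream HTTP proxy
upstream delegate {
    server localhost:8082;  # hypothetical local DeleGate instance
}

server {
    listen 80;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://delegate;
    }
}
```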