nginx ignore missing HTTP headers

Is there any way to tell nginx to ignore missing HTTP headers when proxying requests?
There is an existing proprietary HTTP server that serves responses without any headers. The server cannot be configured. I need various endpoints from this server in a web application, so I want to set up nginx to proxy requests to it. I have this location configuration in my regular server block:
location /api/ {
    proxy_pass http://localhost:80/;
}
When calling the corresponding URIs, nginx complains:
upstream sent no valid HTTP/1.0 header while reading response header from upstream, client: 127.0.0.1, server: localhost, request: ....
Is there any way to tell nginx not to expect any headers, and just to forward the received payload?
Thanks for your help!
Kind regards,
Andreas
Edit: OK, found it: the server is using HTTP/0.9. Calling curl directly against the server threw an error:
curl: (1) Received HTTP/0.9 when not allowed
Using the option --http0.9 got the desired result. (A browser receives the response without further ado.) Is there any chance to tell nginx to proxy to an HTTP/0.9 server?

You can configure nginx to ignore certain headers by setting the proxy_ignore_headers directive for X-Accel-Expires and Expires. This tells nginx to disable processing of those specific response header fields, so it does not act on them when proxying requests.
You can add this to the location block in your nginx configuration:
proxy_ignore_headers X-Accel-Expires Expires;
Additionally, proxy_pass_request_headers off; can be used; this tells nginx not to forward the original request headers to the proxied server.
As for the payload, as long as the underlying protocol is HTTP, it should be fine: the payload is sent in the body of the HTTP request.
So your location block could look like this:
location /api/ {
    proxy_pass http://localhost:80/;
    proxy_ignore_headers X-Accel-Expires Expires;
    proxy_pass_request_headers off;
}
Please note that this may not be recommended, as the missing headers may contain important information, such as authentication data, and should be handled correctly.

Related

nginx forward request with the same domain

I'm not an expert on networking or servers, but I need to configure an Nginx server that will listen on two different external addresses:
https://fake.net
https://example.com
I have a Node.js application deployed locally on this server at http://localhost:3020.
I'm trying to proxy from nginx to the Node.js app, but the thing is that I need the Node.js API to receive the request with the original request URL.
Is there any way to forward the request in this way:
Request: https://fake.net/api/test -------> In the Node app received: http://fake.net/api/test
Request: https://example.com/api/test1 -------> In the Node app received: http://example.com/api/test1
The Host header defines the domain part of the proxied request. Generally $host is used to take the value from the original request; see the documentation for proxy_set_header for details.
The request URI is passed to the upstream unchanged if no optional URI is given to the proxy_pass directive; see the documentation for proxy_pass for details.
For example:
location /api/ {
    proxy_set_header Host $host;
    proxy_pass http://localhost:3020;
}
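To make the two-domain setup concrete, a minimal sketch of the full configuration might look like this (the server names come from the question; the listen ports and certificate paths are assumptions):
server {
    listen 443 ssl;
    server_name fake.net;
    ssl_certificate     /etc/nginx/ssl/fake.net.crt;    # assumed path
    ssl_certificate_key /etc/nginx/ssl/fake.net.key;    # assumed path
    location /api/ {
        proxy_set_header Host $host;    # preserves the original domain
        proxy_pass http://localhost:3020;
    }
}
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;    # assumed path
    ssl_certificate_key /etc/nginx/ssl/example.com.key;    # assumed path
    location /api/ {
        proxy_set_header Host $host;
        proxy_pass http://localhost:3020;
    }
}
Since $host carries the original domain into the proxied request, the two blocks could also be merged into a single server block listing both names in server_name.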

NGINX Reverse proxy response

I am using an NGINX server as a reverse proxy. The NGINX server accepts a request from an external client (HTTP or HTTPS, it doesn't matter) and passes this request to a backend server. The backend server returns a URL to the client, which the client should then use for subsequent API calls. I want this returned URL to carry the NGINX host and port number instead of the backend host and port number, so that my backend server details are never exposed. For example:
1) Client request:
http://nginx_server:8080
2) Nginx receives this and passes it to the backend, which is running at
http://backend_server:8090
3) The backend server receives this request and returns another URL to the client: http://backend_server:8090/allok.
4) The client uses this URL to make further API calls.
What I want is that in the returned URL, "backend_server:port" is replaced by the nginx server and port from the initial request, e.g.
http://nginx_server:8080/allok
However, the response goes back as
http://backend_server:8090/allok
my nginx.conf:
http {
    server {
        listen 8080;    # client request port
        server_name localhost;
        location / {
            # Backend server port. The backend service and NGINX
            # will always be on the same machine.
            proxy_pass http://localhost:8090;
            # Not sure if this is correct. Doesn't seem to do what
            # I want to achieve.
            proxy_redirect http://localhost:8090 http://localhost:8080;
            # proxy_set_header Host $host;
        }
    }
}
Thanks in advance
I was able to resolve it. I had to eliminate the proxy_redirect directive from the config: with no explicit proxy_redirect, nginx applies proxy_redirect default, which rewrites Location headers that match the proxy_pass URL so that, from the client's point of view, they point back at the nginx host and port.
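For reference, the resolved configuration would then reduce to something like this sketch (ports taken from the question):
http {
    server {
        listen 8080;    # port the client talks to
        server_name localhost;
        location / {
            # With no explicit proxy_redirect, nginx falls back to
            # "proxy_redirect default", which rewrites Location headers
            # like http://localhost:8090/allok to /allok, so the client
            # stays on the nginx host and port.
            proxy_pass http://localhost:8090;
        }
    }
}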

How to Remove Client Headers in Nginx before passing request to upstream server?

The upstream server is Wowza, which does not accept custom headers unless I enable them at the application level.
Nginx is working as a proxy server. From the browser I want to send a few custom headers, which should be received and logged by the Nginx proxy, but before the request is forwarded to the upstream server those headers should be removed from it.
So the upstream server never comes to know that there were any custom headers.
I tried proxy_hide_header as well as proxy_set_header "<header>" "", but it seems like they apply to response headers, not the request headers.
And even if I accept enabling the headers on Wowza, I am still not able to find a way to enable headers at the server level for all applications. Currently I have to add the headers to each newly created application, which is not feasible for me.
Any help would be appreciated.
The proxy_set_header HEADER "" does exactly what you expect. See https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header.
If the value of a header field is an empty string then this field will not be passed to a proxied server:
proxy_set_header Accept-Encoding "";
I have just confirmed this works as documented, using Nginx v1.12.
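As a concrete sketch, assuming a custom header named X-Custom-Token (the header name, log format, and Wowza address are all hypothetical), logging the header at the proxy and stripping it before forwarding could look like this:
http {
    # Log the custom header (name hypothetical) before it is removed.
    log_format with_custom '$remote_addr "$http_x_custom_token" "$request"';
    server {
        listen 80;
        access_log /var/log/nginx/access.log with_custom;
        location / {
            # An empty value removes the field from the proxied request.
            proxy_set_header X-Custom-Token "";
            proxy_pass http://localhost:1935;    # assumed Wowza address
        }
    }
}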

Turning off gzip on nginx for some URIs

I have this Node.js app behind nginx; let's call it A. Some requests to the app are proxied to another nginx/Node.js server (this is B, a backend API) by the Node.js app (not by nginx).
Currently, both nginx instances enable gzip compression, which is an issue: responses from B to A are compressed, and A compresses them again, making the response unusable by the browser.
The ideal solution would be to tell A not to compress responses for requests forwarded to B. Unfortunately, I can't get it to work. Here's my nginx config on A:
gzip on;
gzip_types application/json;

location / {
    if ($request ~ /api/) {
        gzip off;
    }
    # proxy to the node app
    proxy_redirect off;
    proxy_pass http://wtb-backoffice;
}
With that setup, proxied requests are still double-compressed. If I replace gzip off with return 404, my /api/smth requests do return a 404, so the condition matches.
If I turn off compression on server B and enable it on A without the location condition, the content is compressed once and is thus readable, which makes sense. But with the gzip off condition present, responses are received uncompressed.
So I conclude that the gzip off directive only works when the raw content is uncompressed; otherwise it will stupidly compress it again. Does this make sense to anyone, and how can I fix it?
BTW, before you suggest stopping proxying or using nginx for the proxy: A and B do not use the same authentication mechanisms, so the proxy controller on A does some magic and cannot be bypassed.
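No answer is recorded here, but one avenue worth trying (an untested sketch; the upstream name is taken from the config above) is to move the /api/ traffic into its own location block instead of toggling gzip inside an if, which is exactly the kind of construct the nginx "if is evil" guidance warns about:
gzip on;
gzip_types application/json;

# Dedicated block: /api/ responses pass through without re-compression.
location /api/ {
    gzip off;
    proxy_redirect off;
    proxy_pass http://wtb-backoffice;
}
location / {
    proxy_redirect off;
    proxy_pass http://wtb-backoffice;
}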

How to fix Sinatra redirecting https to http under nginx

I have a Sinatra app running behind nginx (using thin as a backend) and I'm using redirect '/<path>' statements in Sinatra. However, when I access the site over https, those redirects send me to http://localhost/<path> rather than to https://localhost/<path> as they should.
Currently, nginx passes control to thin with the directive proxy_pass http://thin_cluster, where thin_cluster is:
upstream thin_cluster { server unix:/tmp/thin.cct.0.sock; }
How can I fix this?
In order for Sinatra to correctly assemble the URL used for redirects, it needs to be able to determine whether the request is using SSL, so that the redirect can be made with http or https as appropriate.
Obviously the actual call to thin isn't using SSL, as this is handled by the front-end web server, and the proxied request is in the clear. We therefore need a way to tell Sinatra that it should treat the request as secure, even though it isn't actually using SSL.
Ultimately, the code that determines whether the request should be treated as secure is in the Rack::Request#ssl? and Rack::Request#scheme methods. The scheme method examines the env hash to see if one of a number of entries is present. One of these is HTTP_X_FORWARDED_PROTO, which corresponds to the X-Forwarded-Proto HTTP header. If this is set, its value is used as the protocol scheme (http or https).
So if we add this HTTP header to the request when it is proxied from nginx to the back end, Sinatra will be able to correctly determine when to redirect to https. In nginx we can add headers to proxied requests with proxy_set_header, and the scheme is available in the $scheme variable.
So adding the line
proxy_set_header X-Forwarded-Proto $scheme;
to the nginx configuration after the proxy_pass line should make it work.
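Put together with the upstream from the question, the relevant part of the configuration might look like this sketch (the location path and certificate paths are assumptions):
upstream thin_cluster {
    server unix:/tmp/thin.cct.0.sock;
}
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/site.crt;    # assumed path
    ssl_certificate_key /etc/nginx/ssl/site.key;    # assumed path
    location / {
        proxy_pass http://thin_cluster;
        # Tells Rack/Sinatra which scheme the original request used,
        # so Sinatra's redirects keep https.
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}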
You can force all links to go to https in the nginx layer.
in nginx.conf:
server {
    listen 80;
    server_name example.com;
    rewrite ^(.*) https://$server_name$1 redirect;
}
This is good to have as well, to ensure that your requests are always served over https.
