I am trying to set up nginx to handle file uploads and pass the file information on to a backend server once the upload is done. I came across a post at https://coderwall.com/p/swgfvw that shows how to do this, and I can see a file being uploaded to the /tmp directory. However, I would also like to pass the file name and type (Content-Disposition and Content-Type) on to the backend server.
I tried capturing what is received at the HTTP server port and see the following:
POST /upload HTTP/1.1
User-Agent: curl/7.32.0
Host: MyHostName
Accept: */*
Content-Length: 4431
Expect: 100-continue
Content-Type: multipart/form-data; boundary=------------------------6060af4f937c14c9
--------------------------6060af4f937c14c9
Content-Disposition: form-data; name="filedata"; filename="sessions.txt"
Content-Type: text/plain
followed by the data.
My nginx location block for the upload is:
location /upload {
    limit_except POST { deny all; }
    client_body_temp_path /tmp/;
    client_body_in_file_only on;
    client_body_buffer_size 128K;
    client_max_body_size 100M;
    proxy_redirect off;
    proxy_set_header X-FILE $request_body_file;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_set_header Connection "";
    proxy_pass_request_headers on;
    proxy_set_body off;
    proxy_http_version 1.1;
    proxy_pass http://my_backend;
}
With this I am able to pass on and receive the following at my backend:
'content-type': 'multipart/form-data; boundary=------------------------6060af4f937c14c9'
'x-file': '/tmp/0000000001'
but I would really like to know how I can also get
Content-Disposition: form-data; name="filedata"; filename="sessions.txt"
Content-Type: text/plain
to my backend. Any help with this is much appreciated.
P.S.: I hope it's OK to ask this question here? (I tried Super User, but it doesn't seem to have much activity.)
If the header is being ignored, try
proxy_pass_header Content-Disposition;
or pass it on explicitly:
proxy_set_header Content-Disposition $http_content_disposition;
By default nginx silently drops request headers whose names contain underscores, so an option that might help is
underscores_in_headers on;
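Note that in the capture above, Content-Disposition and the text/plain Content-Type are part headers inside the multipart body, not request headers, so they are not available as $http_content_disposition. However, because client_body_in_file_only stores the raw body, the backend can recover them from the file named in X-FILE. A minimal Python sketch under that assumption (the helper name is hypothetical):

```python
# Sketch: recover the first part's headers (Content-Disposition,
# Content-Type) from the raw multipart body that nginx wrote to disk.
# Assumes the backend receives the temp-file path via the X-FILE header
# and the request's Content-Type header, which carries the boundary.
import email.parser

def parse_part_headers(body_path, content_type):
    # "multipart/form-data; boundary=----xyz" -> "----xyz"
    boundary = content_type.split("boundary=", 1)[1].strip('"')
    delimiter = ("--" + boundary).encode()
    with open(body_path, "rb") as f:
        raw = f.read()
    # The first part sits between the first two boundary delimiters;
    # its headers end at the first blank line (CRLF CRLF).
    first_part = raw.split(delimiter, 2)[1].lstrip(b"\r\n")
    header_blob = first_part.split(b"\r\n\r\n", 1)[0].decode()
    return dict(email.parser.Parser().parsestr(header_blob).items())
```

Applied to the captured request, this yields the form-data disposition with filename="sessions.txt" and the text/plain part type.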
I have not been able to pass an application-specific header to my application, which is running on uWSGI and Flask.
This is from my nginx.conf:
proxy_pass http://localhost:5000/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_set_header X-NginX-Proxy true;
proxy_pass_request_headers on;
proxy_set_header $HTTP_Chart-Type $http_chart_type;
}
These are my headers from chrome:
Accept: */*
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9
Cache-Control: no-cache
chart_type: line
I am trying to pass the chart_type header to my backend.
Thanks
The default value of proxy_pass_request_headers is on, so there should be no need to set it explicitly (unless it is turned off elsewhere in the config and that affects your configuration).
With the default setting (on), nginx passes all headers to the backend, so you don't need any special configuration to pass a custom header (assuming your chart_type is a custom header).
Your problems passing the chart_type header from nginx to the backend are likely related to nginx by default not allowing header names that contain underscores. See https://stackoverflow.com/a/74798560/3571 .
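For completeness, once nginx accepts the header (e.g. with underscores_in_headers on; in the server block), it reaches the backend through the WSGI environ: per PEP 3333 the name is upper-cased, prefixed with HTTP_, and hyphens become underscores, so chart_type arrives as HTTP_CHART_TYPE. A minimal sketch in plain WSGI rather than Flask (in Flask the same value is request.environ["HTTP_CHART_TYPE"]):

```python
# Minimal WSGI app showing where a custom request header surfaces:
# the incoming header "chart_type: line" appears in the environ as
# "HTTP_CHART_TYPE" and is echoed back in the response body.
def app(environ, start_response):
    chart_type = environ.get("HTTP_CHART_TYPE", "unset")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [chart_type.encode()]
```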
I have to set custom headers on outgoing requests using an nginx proxy. I started by trying the add_header and proxy_set_header directives in the conf file (snippet added below), but the headers were not added to the outgoing request with either of them. Please suggest an approach to solve this problem.
server {
    listen 8095;
    access_log logs/host.access.log;
    location / {
        proxy_pass https://www.dummy-site.com/applications/1232;
        proxy_set_header Authorization "some_authorisation";
        proxy_set_header referer "referer";
        proxy_pass_request_headers on;
    }
}
So I have set up a reverse proxy to tunnel my application.
Unfortunately the application thinks it is served via http and not https and gives out URLs with port 80.
How can I handle this in the nginx reverse proxy? (by rewriting maybe)
When I go on the page:
https://my.server.com
index.php loads, everything is okay
after clicking something I have a URL like this:
https://my.server.com:80/page/stuff/?redirect_to
which throws an error within the browser because my reverse proxy doesn't serve SSL on port 80.
How can I mitigate this?
My current nginx ssl vhost for the site:
... ssl stuff ...
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
location / {
    proxy_pass http://localhost:22228;
    proxy_buffering off;
    proxy_redirect off;
    proxy_read_timeout 43800;
    proxy_pass_request_headers on;
    proxy_set_header Connection "Keep-Alive";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Port 443;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass_header Content-Type;
    proxy_pass_header Content-Disposition;
    proxy_pass_header Content-Length;
    proxy_set_header X-Forwarded-Proto https;
}
(yes I know my request headers look like a christmas tree 🎄)
Also bonus points if you show where the documentation addressing this issue is and what the mechanism is called.
For rewriting the response body you can use the http_sub_module:
location / {
    proxy_pass http://localhost:22228;
    sub_filter_once off;
    sub_filter_types text/css application/javascript; # in addition to text/html
    sub_filter "//my.server.com:80/" "//my.server.com/";
}
Many people say (1, 2) that you need to disable compression when using the sub_filter directive:
proxy_set_header Accept-Encoding "";
For me, it works fine without this line in config, but it can be a feature of OpenResty which I use instead of nginx.
If your app generates HTTP 30x redirects with an explicit domain:port, you can rewrite the Location header value with the proxy_redirect directive:
proxy_redirect //my.server.com:80/ //my.server.com/;
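A complementary fix on the application side: the backend is building absolute URLs from what it sees locally (plain http on port 80). If it can be taught to honor the X-Forwarded-Proto header the vhost above already sends, the links come out right without any rewriting. A sketch of that logic as a hypothetical WSGI helper (not from any specific framework):

```python
# Hypothetical helper: build an external URL for a proxied WSGI request,
# preferring the X-Forwarded-Proto header set by the proxy over the
# local scheme, and dropping a stale ":80" so it never leaks into links.
def external_url(environ, path):
    scheme = environ.get("HTTP_X_FORWARDED_PROTO",
                         environ.get("wsgi.url_scheme", "http"))
    host = environ.get("HTTP_HOST", "localhost")
    # The app saw port 80, but externally we are on default HTTPS.
    if host.endswith(":80") and scheme == "https":
        host = host.rsplit(":", 1)[0]
    return f"{scheme}://{host}{path}"
```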
Our system uses POST requests to preload a list of assets. Given the same list of asset identifiers, the server will respond with the same list of asset data. Since the list of identifiers can be very long (it's actually a multipart request containing a JSON list), we used POST instead of GET, although the operation is idempotent.
We use NGINX as a reverse proxy in front of these servers. I successfully configured it to work, but something about it "feels" wrong: I return a Cache-Control: max-age=3600 header in the POST responses that I want cached, and I have NGINX strip it before returning the responses to the client.
RFC 7234 says that only the method and the URI will be used as a cache key; I could use the Vary header, but it seems to be limited to other headers...
I'm not sure how reliable the browser will be. It "seems" that if I make an HTTP POST response cacheable, it will be cached for "future GET requests", which is NOT what I intend.
So, my choices seem to be:
Return a Cache-Control header knowing (or hoping?) that there will be a reverse proxy in front of it stripping that header.
Return a Cache-Control header and let it go through. If someone can explain why it's actually reliable, that would be simple (or if there's another similar header?)
Do not use Cache-Control and instead "hardcode" all URLs directly in my NGINX configuration (I couldn't make this work yet though)
Is there a reliable approach I can use to achieve what I need here? Thanks a lot for your help.
Here's an excerpt of the NGINX configuration if that helps someone:
proxy_cache_path /path/to/nginx/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;
location /path/to/post/request {
    proxy_pass http://remote-server;
    proxy_cache my_cache;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_cache_lock on;
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    proxy_cache_methods POST;
    proxy_cache_key "$scheme$proxy_host$uri$is_args$args|$request_body";
    proxy_cache_valid 5m;
    client_max_body_size 1500k;
    proxy_hide_header Cache-Control;
}
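As an aside on how such a key behaves: nginx stores each cache entry in a file named after the MD5 of the expanded proxy_cache_key string, and levels=1:2 turns the trailing hex characters of that hash into subdirectory names. An illustrative Python sketch of the mapping (not nginx source; the function name is ours):

```python
# Sketch: map an expanded cache key like
#   "$scheme$proxy_host$uri$is_args$args|$request_body"
# to the on-disk path nginx would use under levels=1:2.
import hashlib

def cache_path(scheme, proxy_host, uri, args, body, levels=(1, 2)):
    key = f"{scheme}{proxy_host}{uri}{'?' + args if args else ''}|{body}"
    h = hashlib.md5(key.encode()).hexdigest()
    # levels=1:2 -> last hex char is the first directory level,
    # the two chars before it are the second.
    parts, pos = [], len(h)
    for n in levels:
        parts.append(h[pos - n:pos])
        pos -= n
    return "/".join(parts + [h])
```

Because the body is part of the key, two POSTs with different identifier lists hash to different entries, which is exactly what the configuration above relies on.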
I have an nginx configuration that looks like this:
location /textproxy {
    proxy_pass http://$arg_url;
    proxy_connect_timeout 1s;
    proxy_redirect off;
    proxy_hide_header Content-Type;
    proxy_hide_header Content-Disposition;
    add_header Content-Type "text/plain";
    proxy_set_header Host $host;
}
The idea is that this proxies to a remote URL and rewrites the Content-Type header to text/plain.
For example I would call:
http://nx/textproxy?url=http://foo:50070/a/b/c?arg=abc:123
And it would return the contents of http://foo:50070/a/b/c?arg=abc:123, but wrapped with a text/plain header.
This doesn't seem to work, though; I constantly get 'invalid upstream port' errors:
2013/07/23 19:05:10 [error] 25292#0: *963177 invalid port in upstream "http://foo:50070/a/b/c?arg=abc:123", client: xx.xxx.xx.xx, server: ~^(?<h>nx)(\..+|)$, request: "GET /textproxy?url=http://foo:50070/a/b/c?arg=abc:123 HTTP/1.1", host: "nx"
Any ideas? I'm having a hard time trying to work it out.
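One thing worth checking (an assumption about the failure mode, illustrated in Python rather than nginx's actual parser): proxy_pass http://$arg_url already supplies a scheme, so a caller sending url=http://... produces a doubled scheme, and the text nginx then tries to read as host:port is malformed:

```python
# Illustration: expanding proxy_pass "http://$arg_url" when the caller
# already includes "http://" in the url argument yields a bogus target.
from urllib.parse import urlsplit

arg_url = "http://foo:50070/a/b/c?arg=abc:123"
target = "http://" + arg_url  # what the proxy_pass value expands to
parts = urlsplit(target)
# The "host" portion is now the second "http:" -- a name ending in a
# colon with no usable port, the same shape of failure nginx reports.
```

If that is the cause, sending the url argument without its scheme (url=foo:50070/a/b/c?...) or URL-encoding the parameter is worth a try.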