Is there any way to add a request header when passing a request on to the proxied server? I tried both add_header and proxy_set_header, but neither worked for me.
Below is the content of the headers.conf file I tried:
Trial1:
proxy_set_header X-Name "Vishal";
Trial2:
add_header X-Name "Vishal";
My nginx\conf\includes\proxy.conf:
location /api/mysvc/v1 {
    proxy_pass "https://mockable.io/mysvc/v1/";
    proxy_pass_request_headers on;
    proxy_set_header X-Name "Vishal";
}
I want to pass this request header along with every AJAX request my app makes.
Just realised I should not put proxy_set_header in my headers.conf file; it only worked once I kept it solely in the proxy.conf file. Also note that these headers are sent to the upstream, not back to the client, so they are not visible in the browser's debugging tools.
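One likely explanation (a sketch of nginx's documented inheritance behaviour, with illustrative header names): proxy_set_header directives are inherited from the enclosing level only when the current level defines none of its own, so headers set in an included headers.conf at server level are silently dropped as soon as a location adds any proxy_set_header itself:

    server {
        proxy_set_header X-Name "Vishal";             # ignored below ...
        location /api/mysvc/v1 {
            proxy_set_header X-Trace-Id $request_id;  # ... because this level now
                                                      # defines its own header set
            proxy_pass "https://mockable.io/mysvc/v1/";
        }
    }

Keeping all proxy_set_header lines at the same level as the proxy_pass avoids the surprise.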
We are using nginx as a reverse HTTP proxy in front of Tomcat. Our app can return different HTTP status codes (400, 401, 403, 404) depending on the request. Every response from our app carries a custom HTTP header, say X-Custom, which is used to determine whether the app is working and serving the request.
Our goal on the nginx side is to serve the custom error pages generated by our app when the X-Custom header is present, and otherwise (when X-Custom is absent) to serve our own static page from nginx.
The problem is that I cannot find a working solution even though I have tried almost everything. I am sure I am missing something obvious.
The nginx configuration looks like this:
location / {
    proxy_set_header X-Forwarded-Port 80;
    proxy_set_header X-Forwarded-Proto https;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_intercept_errors on;
    proxy_pass http://172.16.0.32:8080;

    if ($is_x_custom_not_ok = "No") {
        error_page 400 401 404 403 500 502 503 /503.html;
    }
}
map $upstream_http_x_custom $is_x_custom_not_ok {
    default "No";
    ~.      "Yes";
}
location /503.html {
    root /var/www/;
    internal;
}
This causes the static 503.html to be shown even when the X-Custom header is present and the app returns HTTP 400. 503.html should be served only when X-Custom is absent from the response.
Thanks
So, after endless hours I figured out that this is not going to work. The reason is that the if block (and the map it reads) is evaluated while the request is being processed, before it is ever passed to the upstream, so neither the map nor the if can see the X-Custom response header at the time they run.
My solution is to use OpenResty (nginx with Lua) to examine the custom header and decide which error page to display.
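Under that approach, a minimal OpenResty sketch could look like the following (the /upstream wrapper location is an illustrative name, and real configs would also relay response headers such as Content-Type):

    location / {
        content_by_lua_block {
            -- issue a subrequest to the upstream and inspect its response headers
            local res = ngx.location.capture("/upstream" .. ngx.var.request_uri)
            if res.header["X-Custom"] then
                -- app answered with its marker header: relay its response,
                -- including any error page it generated itself
                ngx.status = res.status
                ngx.print(res.body)
            else
                -- marker header missing: fall back to the static page
                ngx.exec("/503.html")
            end
        }
    }

    location /upstream/ {
        internal;
        proxy_pass http://172.16.0.32:8080/;
    }

Because the header check now happens in Lua after the subrequest has completed, the decision is made with the upstream's actual response in hand, which is exactly what the map/if approach could not do.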
I'm having some issues with our setup, which consists of:
Proxy -> CDN cache -> Origin.
It is set up so that when we update content on the origin, we purge the CDN cache, after which I expect to see the update on the live site (the proxy). But around 20-30% of the time the update does not show up on the proxy, even though I can see the CDN has been updated correctly, and I can't wrap my head around why.
I've turned off the cache on the proxy.
For the cases where the proxy does not show the correct content after a purge, clearing the cache a few more times will eventually surface it.
Responses for the different stages below.
The CDN is Microsoft Azure CDN from Verizon.
Origin headers:
CDN headers:
Live headers:
Proxy config:
server {
    server_name <DOMAIN>;

    location / {
        proxy_no_cache 1;
        proxy_cache_bypass 1;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 128;
        proxy_hide_header Cache-Control;
        add_header Cache-Control max-age=120; # browser cache of 120s
        proxy_pass "https://<CDN-BASE-URL>$request_uri";
    }
}
Looking closer, it seems the Last-Modified values are a mismatch; could this be the cause?
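If stale Last-Modified values are involved, one diagnostic worth trying (a sketch, not a confirmed fix) is stripping conditional request headers on the proxy so the CDN can never answer with a 304 based on a validator it no longer agrees with, forcing a full response every time:

    location / {
        # ... existing proxy settings ...
        # Drop conditional headers so the CDN must always send a full
        # response instead of a 304 revalidation against a stale validator
        proxy_set_header If-Modified-Since "";
        proxy_set_header If-None-Match "";
        proxy_pass "https://<CDN-BASE-URL>$request_uri";
    }

If the stale responses stop after this change, the mismatched Last-Modified is indeed the culprit.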
I wrote a REST API in Haskell that delivers HTML to be viewed in a browser, and I am currently trying to host it behind an nginx reverse proxy.
My backend, however, requires Basic Auth credentials, which the nginx server doesn't provide.
How can I configure the reverse proxy so that it asks for credentials when a GET request is made from the browser, but doesn't validate them and instead passes them on to the backend?
I have tried about half a dozen suggestions from Stack Overflow, Reddit, etc., but haven't found a working solution.
This is my current config:
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8000/;
        auth_basic "user-realm";
        proxy_set_header X-Forwarded-User $http_authorization;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass_header Accept;
        proxy_pass_header Server;
        proxy_http_version 1.1;
        proxy_set_header Authorization $http_authorization;
        #proxy_pass_header Authorization;
        proxy_set_header ns_server-ui yes;
    }
}
A few of the articles I tried or read are the following:
https://www.reddit.com/r/couchbase/comments/2wksmj/authorization_headers_when_using_nginx_as_a/
https://serverfault.com/questions/511206/nginx-forward-http-auth-user
https://serverfault.com/questions/230749/how-to-use-nginx-to-proxy-to-a-host-requiring-authentication
All or most seem to focus on how to let nginx take over the authorization; however, I want it only to pass on the credentials entered in the browser.
After several hours of trying I found out that my config wasn't even being used: for some reason requests always matched the default config.
I didn't figure out why, but I got the proxy to work as I intended once I changed this
location / {
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    try_files $uri $uri/ =404;
}
to this
location / {
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    proxy_pass http://localhost:8000;
}
The browser then gets prompted for credentials (the backend's 401 challenge is relayed through the proxy), and nginx passes the Authorization header on to the backend, which it does by default.
(So in the end my config simply wasn't being used, and therefore nothing was being routed to the backend.)
Hopefully this saves someone some searching time in the future :)
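For reference, a minimal passthrough sketch along these lines (assuming the backend issues its own 401 WWW-Authenticate challenge; nginx forwards client request headers, including Authorization, without any auth_basic directive at all):

    server {
        listen 80;

        location / {
            # No auth_basic here: the backend's 401 challenge makes the
            # browser prompt, and the resulting Authorization header is
            # forwarded upstream with the rest of the request headers.
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

The key point is that nginx never needs to understand the credentials; it only has to avoid intercepting the 401 and avoid stripping the Authorization header, neither of which it does by default.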
I have to set custom headers on outgoing requests using an nginx proxy. I started by trying the add_header and proxy_set_header directives in the conf file (snippet below), but the headers were not added to the outgoing request with either of them. Please suggest an approach to solve this problem.
server {
    listen 8095;
    access_log logs/host.access.log;

    location / {
        proxy_pass https://www.dummy-site.com/applications/1232;
        proxy_set_header Authorization "some_authorisation";
        proxy_set_header referer "referer";
        proxy_pass_request_headers on;
    }
}
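One way to see what nginx actually sends upstream (a debugging sketch; the port number is made up) is to point proxy_pass temporarily at a local server block that echoes the relevant headers back. Headers added with proxy_set_header only appear on the proxy-to-upstream leg, so the browser's dev tools, which show only the client-to-proxy leg, will never display them:

    server {
        listen 8096;

        location / {
            # Echo the request headers the proxy actually sent us
            default_type text/plain;
            return 200 "Authorization: $http_authorization\nReferer: $http_referer\n";
        }
    }

Switch the original location to proxy_pass http://127.0.0.1:8096; and request http://localhost:8095/ to confirm the headers are present before pointing it back at the real upstream.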
Our system uses POST requests to preload a list of assets. Given the same list of asset identifiers, the server will respond with the same list of asset data. Since the list of identifiers can be very long (it's actually a multipart request containing a JSON list), we used POST instead of GET even though the operation is idempotent.
We use NGINX as a reverse proxy in front of these servers. I successfully configured it to work, but there's something that "feels" wrong: I return a Cache-Control: max-age=3600 header in the POST responses that I want cached, and I have NGINX strip it before returning the response to the client.
RFC 7234 says that only the method and the URI are used as the cache key; I could use the Vary header, but it seems to be limited to other request headers...
I'm also not sure how reliable the browser will be. It "seems" that if I make an HTTP POST response cacheable, it will be cached for "future GET requests", which is NOT what I intend.
So, my choices seem to be:
Return a Cache-Control header knowing (or hoping?) that there will be a reverse proxy in front of it stripping that header.
Return a Cache-Control header and let it go through. If someone can explain why it's actually reliable, that would be simple (or if there's another similar header?)
Do not use Cache-Control and instead "hardcode" all URLs directly in my NGINX configuration (I couldn't make this work yet though)
Is there a reliable approach I can use to achieve what I need here? Thanks a lot for your help.
Here's an excerpt of the NGINX configuration if that helps someone:
proxy_cache_path /path/to/nginx/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

location /path/to/post/request {
    proxy_pass http://remote-server;
    proxy_cache my_cache;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_cache_lock on;
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    proxy_cache_methods POST;
    proxy_cache_key "$scheme$proxy_host$uri$is_args$args|$request_body";
    proxy_cache_valid 5m;
    client_max_body_size 1500k;
    proxy_hide_header Cache-Control;
}
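One small hardening worth considering (a sketch, not a confirmed requirement for this setup): the cache key above does not include the request method, so adding $request_method makes it explicit that a cached POST response can never be served for a GET to the same URI, regardless of what the browser or any intermediary does:

    # Include the method in the key so POST and GET entries can never collide
    proxy_cache_key "$request_method$scheme$proxy_host$uri$is_args$args|$request_body";

In practice the $request_body component already separates most GETs (empty body) from POSTs, but keying on the method documents the intent and costs nothing.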