Case:
I have a REST API served via HTTPS, and I want to configure a basic caching proxy service on my host to cache API requests and get the same information back faster.
I have the following configuration of Nginx:
proxy_cache_path /tmp/cache levels=1:2 keys_zone=my_cache:10m max_size=10g
                 inactive=60m use_temp_path=off;

http {
    server {
        location /my_api/ {
            proxy_redirect off;
            proxy_buffering on;
            proxy_ignore_headers X-Accel-Expires;
            proxy_ignore_headers Expires;
            proxy_ignore_headers Cache-Control;
            proxy_cache my_cache;
            proxy_pass https://example.com/myapi/;
        }
    }
}
Now I'm comparing response times, and a call to the REST API through my local caching proxy takes just as long as a call directly to the remote service, so caching doesn't work.
Also, the cache directory is empty.
An example of a request to the real API (this is not the real case):
curl "https://example.com/myapi/?key=1"
Example of request to proxy:
curl "http://127.0.0.1:8080/myapi/?key=1"
In the REST API response headers I can see
cache-control: max-age=0, no-cache, no-store, must-revalidate
Can Nginx ignore it somehow?
What should I change in the proxy configuration to see the boost for REST API?
I wonder if the issue could be related to HTTPS traffic? Or maybe the response from the REST API has some no-caching headers, or the size of the response is too small for caching?
I finally found a way to configure caching for my REST API; here is the final configuration:
http {
    proxy_cache_path /tmp/cache levels=1:2 keys_zone=my_cache:10m;

    server {
        listen 8080;
        server_name localhost;

        location /myapi {
            proxy_buffering on;
            proxy_ignore_headers Expires Cache-Control X-Accel-Expires;
            proxy_ignore_headers Set-Cookie;
            proxy_cache my_cache;
            proxy_cache_valid 24h;
            proxy_pass https://example.com/myapi;
        }
    }
}
In addition, if you are caching a REST API (e.g. GET and POST), I also suggest adding
proxy_cache_methods GET POST;
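One caveat worth noting with POST caching: by default nginx builds the cache key from $scheme$proxy_host$request_uri only, so two POSTs with different bodies but the same URI would share a single cache entry. If POST responses depend on the body, something like the following extra directive is usually needed (a hedged sketch, not verified against any particular API):

```nginx
# Include the request body in the cache key so that distinct POST
# bodies produce distinct cache entries (the body must be buffered).
proxy_cache_key "$request_uri|$request_body";
```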
Related
I want to run a local nginx proxy to cache all the responses coming from a remote server.
This should be working but the only outcome I'm getting is a straight redirect from localhost:81 to www.stackoverflow.com. What am I missing?
proxy_cache_path /Temp/cache levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=1g;

server {
    listen 81;
    listen [::]:81;
    server_name localhost;

    location / {
        proxy_pass https://www.stackoverflow.com/;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_ignore_headers Expires Cache-Control Set-Cookie Vary;
        add_header X-Proxy-Cache $upstream_cache_status;
    }
}
You're missing that nginx is a reverse proxy server, not a forward proxy server.
If you still want to do this with nginx, a few more steps are needed. I found instructions (in Russian) for this:
https://habr.com/ru/post/680992/
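For reference, a plain-HTTP forward proxy in stock nginx can be sketched roughly like this (illustrative only; the listen port and resolver address are placeholders, and HTTPS forward proxying requires the CONNECT method, which stock nginx does not support without third-party modules such as ngx_http_proxy_connect_module):

```nginx
server {
    listen 3128;

    # A resolver is required because $host differs per request
    # and cannot be resolved at configuration load time.
    resolver 1.1.1.1;

    location / {
        proxy_pass $scheme://$host$request_uri;
    }
}
```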
I have an Ubuntu instance with NGINX installed and configured as a forward proxy on one host for my application on a different host.
My app is making GET requests to NGINX, which is making another GET request to an external server (the URL of this server is specified in the request) and returning the response to the application.
NGINX is supposed to cache the response from the external server.
I need to respect the Cache-Control header from the response (cache the response for as long as the header says), BUT when there is no Cache-Control header in the response, it must be cached for 12h. What do I do to achieve this?
Thanks! :)
Here is my actual config:
proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=my_cache:100m max_size=2m inactive=12h use_temp_path=off;

location ~* {
    resolver xx.xx.x.xxx;
    proxy_cache my_cache;
    add_header X-Cache-Status $upstream_cache_status;

    if ($http_x_example_use_https = '1') {
        proxy_pass https://$host;
    }
    if ($http_x_example_use_https = '0') {
        proxy_pass http://$host;
    }

    proxy_redirect off;
    proxy_connect_timeout 4;
    proxy_send_timeout 4;
    proxy_read_timeout 4;
    send_timeout 4;
    proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
    proxy_cache_lock on;
}
PS Any thoughts to improve this config or change something? :)
I think I have solved my problem using:
proxy_cache_valid 200 12h;
And when the Cache-Control header is present, it shouldn't be overwritten by this directive above.
NGINX documentation:
Parameters of caching can also be set directly in the response header. This has higher priority than setting of caching time using the directive.
If the header does not include the “X-Accel-Expires” field, parameters of caching may be set in the header fields “Expires” or “Cache-Control”.
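Putting that together, a minimal sketch (hostnames, ports, and paths below are placeholders): because nothing here ignores upstream headers, an upstream X-Accel-Expires, Cache-Control, or Expires header takes priority, and the 12h from proxy_cache_valid applies only when the response carries no caching headers at all.

```nginx
proxy_cache_path /tmp/nginx_cache levels=1:2 keys_zone=my_cache:100m inactive=12h;

server {
    listen 8080;

    location / {
        proxy_cache my_cache;

        # Fallback TTL, used only when the upstream response has no
        # X-Accel-Expires, Cache-Control, or Expires header.
        proxy_cache_valid 200 12h;

        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass http://upstream.example;
    }
}
```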
I am running into something that is extremely odd. I have the following stack:
ASP.Net Core 3.1 API
Angular 10 front end app
Nginx proxy
All of the applications are containerized so I have my API running in a docker container, my angular app in a docker container (that is also using a separate nginx web server to serve the SPA), and a nginx container serving as a proxy for the API.
Below is a typical GET request that has no issues and the relevant headers for the OPTIONS request:
So a GET request is working but when I try to use POST, the options request succeeds immediately followed by a 400 from nginx along with an error message from the browser:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://restaurantapi.localhost/chats. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).
The odd part about above error message is that the OPTIONS request for the POST succeeds:
How is it possible for the OPTIONS request to be successfully returned but the POST to fail? I don't quite understand how this is possible. I know it's Nginx causing this issue because I removed the proxy and sent the request directly from my Angular app in a container to the API using the Kestrel web server (the built-in web server for .NET Core), and it succeeds.
Is there any configuration I am missing causing this problem? Note that I am adding the CORS headers within my API and am not using CORS through nginx. I also tried stripping response headers from API within Nginx and explicitly adding CORS headers and that still fails. Any help on this would be appreciated.
My nginx config:
events {
    worker_connections 1024;
}

http {
    underscores_in_headers on;

    upstream api {
        server restaurantapi:5001;
    }

    upstream grpcservice {
        server restaurantapi:5010;
    }

    # redirect all http requests to https
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        return 301 https://$host$request_uri;
    }

    server {
        server_name restaurantapi.localhost;
        listen 443 ssl http2;
        ssl_certificate /etc/certs/resapi.crt;
        ssl_certificate_key /etc/certs/resapi.key;

        location /CartCheckoutService/ValidateCartCheckout {
            grpc_pass grpc://grpcservice;
            error_page 502 = /error502grpc;
        }

        location = /error502grpc {
            internal;
            default_type application/grpc;
            add_header grpc-status 14;
            add_header grpc-message "Error connecting to gRPC service.";
            return 204;
        }

        location / {
            proxy_pass http://api;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Connection keep-alive;
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_cache_bypass $http_upgrade;
        }
    }

    gzip on;
    gzip_vary on;
    gzip_proxied no-cache no-store private expired auth;
    gzip_types text/plain text/css application/json application/xml;
}
The logs from the API:
The issue was the WebSocket headers being present (http://nginx.org/en/docs/http/websocket.html). I'm not entirely sure why nginx does not log a connection error to the upstream server; all the logs displayed was the request to nginx.
Removing the WebSocket-specific headers fixed the issue I was having. I need to add those headers only for WebSocket requests.
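The usual way to send those headers only for actual WebSocket requests is the map pattern from the nginx WebSocket documentation. A sketch (the `api` upstream is the one defined in the config above):

```nginx
# Map the client's Upgrade header to the Connection header sent
# upstream: "upgrade" for WebSocket handshakes, "close" otherwise.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass http://api;
    }
}
```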
I have a problem with Nginx acting as a proxy server:
request -> NGINX PROXY -> app server (only one)
The proxy server is listening on port 443 and the application server on 80. Headers returned by the upstream server are being removed by the proxy. I was forced to use:
add_header 'Content-Length' $upstream_http_content_length;
It works OK for Content-Length, but it doesn't work for the Last-Modified header. A curl request from the Nginx proxy to the upstream over its private IP returns all the headers. Why does the Nginx proxy cut this header out even if its return is specified using add_header?
I have following nginx.conf sample:
location /some-web-app {
    proxy_pass http://backend/some-web-app;
    proxy_redirect off;
    proxy_redirect http $scheme;
    proxy_set_header Host $host;
    add_header 'Last-Modified' $upstream_http_last_modified;
    add_header 'Content-Length' $upstream_http_content_length;
    sub_filter 'codebase="http' 'codebase="https';
    sub_filter_types application/x-java-jnlp-file;
    access_log /var/log/nginx/some-web-app_access.log combined_jsession_upstream;
    error_log /var/log/nginx/some-web-app_err.log;
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/htpasswd;
}
To forward the returned Last-Modified header, put the following directive in the location block:
sub_filter_last_modified on;
To forward any other header, use add_header with $upstream_http_${header}. Here I'll forward the Content-Length header:
add_header 'Content-Length' $upstream_http_content_length;
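For context, here is a trimmed version of the location block above with the fix applied (a sketch, not a drop-in config): sub_filter modifies the response body, so nginx clears Last-Modified by default, and sub_filter_last_modified restores pass-through.

```nginx
location /some-web-app {
    proxy_pass http://backend/some-web-app;

    # The body is rewritten by sub_filter, so nginx would normally
    # drop Last-Modified; this directive preserves the upstream value.
    sub_filter_last_modified on;
    sub_filter 'codebase="http' 'codebase="https';
    sub_filter_types application/x-java-jnlp-file;
}
```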
I want to use nginx as a caching proxy in front of an OCSP responder. 'An OCSP request using the POST method is constructed as follows: The Content-Type header has the value "application/ocsp-request" while the body of the message is the binary value of the DER encoding of the OCSPRequest.' (from RFC2560)
Hence, I configured nginx as follows:
proxy_cache_path /tmp/nginx/cache levels=1:2 keys_zone=my-cache:8m max_size=1000m inactive=600m;

server {
    # Make site accessible from http://localhost/
    server_name localhost;

    location / {
        proxy_pass http://213.154.225.237:80; # ocsp.cacert.org
        proxy_cache my-cache;
        proxy_cache_methods POST;
        proxy_cache_valid 200 302 60m;
        proxy_cache_valid 404 1m;
        proxy_cache_key "$uri$request_body";
        expires off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
I can access the OCSP responder through nginx and responses are received as expected - no issue.
The problem is that nginx doesn't cache the responses. Nonces are not being sent as part of the request, and using Wireshark I verified that all my requests are identical (at the HTTP layer). How can I configure nginx so that it caches the responses?
Note, I use the following command for testing:
openssl ocsp -issuer cacert.crt -no_nonce -CAfile CAbundle.crt -url http://localhost/ -serial <SERIAL>
There is a lot more to caching OCSP responses than just caching the DER they are made of. Look into the lightweight OCSP profile and make sure that your responder does include the necessary headers into the response.
I would recommend that you use a specially build OCSP proxy cache, there are many out there. For example Axway's Validation Authority Repeater is a good choice.
In the meanwhile I got the answer at the mailinglist which solved my problem:
Your configuration doesn't contain proxy_cache_valid (see
http://nginx.org/r/proxy_cache_valid), and at the same time, via
proxy_ignore_headers, it ignores all headers which may be used to
set response validity based on response headers. That is, no
responses will be cached with the configuration above.
You probably want to add something like
proxy_cache_valid 200 1d;
to your configuration.
My complete configuration example (works with openca-ocsp):
nginx.conf:
proxy_cache_path /var/cache/nginx/ocsp levels=1:2 min_free=1024M keys_zone=ocsp:10m;
conf.d/ocsp.conf:
server {
    listen 80;

    proxy_cache ocsp;
    proxy_cache_valid 200 404 2m;
    proxy_cache_min_uses 1;
    proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
    proxy_cache_methods POST;
    proxy_cache_key "$request_uri|$request_body";
    add_header X-GG-Cache-Status $upstream_cache_status;

    location = /ocsp {
        # Allow only POST
        limit_except POST {
            deny all;
        }
        proxy_pass http://ocspd:2560/;
    }
}