I'm trying to set up basic caching in my OpenResty nginx web server. I have tried a million different combinations from many different tutorials, but I can't get it right. Here is my nginx.conf file:
user www-data;
worker_processes 4;
pid /run/openresty.pid;
worker_rlimit_nofile 30000;

events {
    worker_connections 20000;
}

http {
    proxy_cache_path /tmp/nginx/cache levels=1:2 keys_zone=cache:10m max_size=100m inactive=60m;
    proxy_cache_key "$scheme$request_method$host$request_uri";
    add_header X-Cache $upstream_cache_status;

    include mime.types;
    default_type application/octet-stream;

    access_log /var/log/openresty/access.log;
    error_log /var/log/openresty/error.log;

    include ../sites/*;

    lua_package_cpath '/usr/local/lib/lua/5.1/?.so;;';
}
And here is my server configuration
server {
    # Listen on port 8080.
    listen 8080;
    listen [::]:8080;

    # The document root.
    root /var/www/cache;

    # Add index.php if you are using PHP.
    index index.php index.html index.htm;

    # The server name, which isn't relevant in this case, because we only have one.
    server_name cache.com;

    # Redirect server error pages to the static page /50x.html.
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /var/www/cache;
    }

    location /test.html {
        root /var/www/cache;
        default_type text/plain;
        try_files $uri /$uri;
        expires 1h;
        add_header Cache-Control "public";
        proxy_cache cache;
        proxy_cache_valid 200 301 302 60m;
    }
}
Caching should work, but it doesn't: there is nothing in error.log or access.log, the cache folder stays empty, and the X-Cache header with $upstream_cache_status doesn't even show up when I fetch the headers with curl (curl -I). My nginx (OpenResty) build is not configured with the --without-ngx_http_proxy_module flag, so the proxy module is there. I have no idea what I am doing wrong, please help.
You didn't define anything that can be cached: proxy_cache only works together with proxy_pass.
The add_header defined inside the http block is not inherited, because you define another add_header at a lower level (the Cache-Control header in your location block). Here is the relevant snippet from the documentation on add_header:
There could be several add_header directives. These directives are inherited from the previous level if and only if there are no add_header directives defined on the current level.
If the always parameter is specified (1.7.5), the header field will be added regardless of the response code.
So you cannot see the X-Cache header as expected.
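For illustration, here is a minimal sketch of a location that could actually be cached. The backend address 127.0.0.1:8081 is a placeholder, and note that $upstream_cache_status is empty (so add_header adds nothing) unless the response comes from a proxied upstream:

location /test.html {
    proxy_pass http://127.0.0.1:8081;           # proxy_cache only caches proxied responses (backend address is hypothetical)
    proxy_cache cache;                          # the keys_zone name from proxy_cache_path
    proxy_cache_valid 200 301 302 60m;
    add_header X-Cache $upstream_cache_status;  # repeat it here, since add_header is not inherited once this level defines one
}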
I want to run a local nginx proxy to cache all the responses coming from a remote server.
This should be working but the only outcome I'm getting is a straight redirect from localhost:81 to www.stackoverflow.com. What am I missing?
proxy_cache_path /Temp/cache levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=1g;

server {
    listen 81;
    listen [::]:81;
    server_name localhost;

    location / {
        proxy_pass https://www.stackoverflow.com/;
        proxy_cache STATIC;
        proxy_cache_valid 200 1d;
        proxy_ignore_headers Expires Cache-Control Set-Cookie Vary;
        add_header X-Proxy-Cache $upstream_cache_status;
    }
}
You are missing that nginx is a reverse proxy server, not a forward proxy server.
If you still want to do this with nginx, there are more steps to take. I found instructions (in Russian) for this:
https://habr.com/ru/post/680992/
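For what it is worth, here is a rough sketch of what reverse-proxying a single external site can look like (an assumption about your goal, not a complete solution, and it still will not make nginx behave like a general forward proxy). One thing it touches on is the redirect you see: the upstream's absolute Location headers send the browser straight back to the real site unless proxy_redirect rewrites them:

location / {
    proxy_pass https://www.stackoverflow.com/;
    proxy_ssl_server_name on;                      # send SNI when talking TLS to the upstream
    proxy_set_header Accept-Encoding "";           # ask for uncompressed responses, easier to inspect
    proxy_redirect https://stackoverflow.com/ /;   # example rewrite of an absolute redirect back to this proxy
    proxy_cache STATIC;
    proxy_cache_valid 200 1d;
    proxy_ignore_headers Expires Cache-Control Set-Cookie Vary;
    add_header X-Proxy-Cache $upstream_cache_status;
}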
I have a cache set up for accessing images:
proxy_cache_path /cache/images-cache/ levels=1:2 keys_zone=media:1m inactive=365d max_size=500m;
and I also have nginx set up like this:
server {
    server_name localhost;
    listen 80;

    location ~ "^/(?<id>.+)/(?<width>)/(?<height>)/(?<image>.+)$" {
        proxy_pass http://localhost:8888;
        proxy_cache media;
        proxy_cache_valid 200 365d;
        proxy_cache_key $width-$height-$image;
    }
}
How can I set logging so it shows which images are fetched from cache?
You can add a response header
add_header X-Cache-Status $upstream_cache_status always;
This will enable you to check whether a given URL was a cache hit or not.
https://nginx.org/en/docs/http/ngx_http_upstream_module.html
You can also use the $upstream_cache_status variable in your log format if you want to generate metrics from it or persist the cache status in the access log.
Follow the example here
https://rtcamp.com/tutorials/nginx/upstream-cache-status-in-access-log/
and add/remove other variables at your convenience.
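For example, a minimal sketch (the format name cache_log and the log path are placeholders):

# inside the http block
log_format cache_log '$remote_addr [$time_local] "$request" '
                     'status=$status cache=$upstream_cache_status';

# inside your server or location block
access_log /var/log/nginx/images-cache.log cache_log;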
I have a website using Play! framework with multiple domains proxying to the backend, example.com and example.ca.
I have all http requests on port 80 being rewritten to https on port 443. This is all working as expected.
But when I type into the address bar http://example.com:443, I'm served nginx's default error page, which says
400 Bad Request
The plain HTTP request was sent to HTTPS port
nginx
I'd like to serve my own error page for this, but I just can't seem to get it working. Here's a snippet of my configuration.
upstream my-backend {
    server 127.0.0.1:9000;
}

server {
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;
    keepalive_timeout 70;
    server_name example.com;
    add_header Strict-Transport-Security max-age=15768000; #six months

    location / {
        proxy_pass http://my-backend;
    }

    error_page 400 502 error.html;
    location = /error.html {
        root /usr/share/nginx/html;
    }
}
It works when my Play! application is shut down, but when it's running it always serves up the default nginx page.
I've tried adding the error page configuration to another server block like this
server {
    listen 443;
    ssl off;
    server_name example.com;
    error_page [..]
}
But that fails with the browser complaining about the certificate being wrong.
Ultimately I'd like to be able to catch and handle any errors which aren't handled by my Play! application with a custom page or pages. I'd also like this solution to work if the user manually enters the site's IP into the address bar instead of the server name.
Any help is appreciated.
I found the answer to this here https://stackoverflow.com/a/12610382/4023897.
In my particular case, where I want to serve a static error page under these circumstances, my configuration is as follows
server {
    listen 443;
    ssl on;
    ssl_certificate /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;
    keepalive_timeout 70;
    server_name example.com;
    add_header Strict-Transport-Security max-age=15768000; #six months

    location = /error.html {
        root /usr/share/nginx/html;
        autoindex off;
    }

    location / {
        proxy_pass http://my-backend;
    }

    # If they come here using HTTP, bounce them to the correct scheme
    error_page 497 https://$host:$server_port/error.html;
}
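If you also want this to work when someone enters the server's IP address instead of the host name, it should be enough to make this block the default server for port 443, since requests whose Host header matches no server_name are handled by the default server (a sketch; it only needs to be explicit if other server blocks also listen on 443):

    listen 443 ssl default_server;   # this block then also answers requests addressed by IP or an unknown Host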
I moved from a setup of Apache 2 + Varnish to Nginx alone, and I'm kinda stuck with how I should set up/use ESI as well as fastcgi_cache in this setup.
First of all, the idea of ESI was that we set up a reverse proxy layer in front of the server to cache the cache-able parts of a page, then use ESI to retrieve the dynamic parts. In my previous setup Varnish acted as the reverse proxy and Apache only handled the ESI requests when necessary.
My question is: now with Nginx acting as the sole server here, how do I make it work? Do I need to set up another Nginx instance running as a reverse proxy server or something? I couldn't find any documentation on this.
The second question is about fastcgi_cache. I have set it up as described below, but the cache doesn't seem to work for me: no cache files are populated and I always get "MISS". I wonder if it's because I need to set max-age/shared-max-age in each controller for it to work?
fastcgi_cache_path /run levels=1:2 keys_zone=www_mysite_com:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_use_stale error timeout invalid_header http_500;

server {
    #listen 80; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default ipv6only=on; ## listen for ipv6

    root /var/www/mysite.com/w/w/w/www/web;
    index index.php index.html index.htm;

    # Make site accessible from http://www.mysite.com
    server_name www.mysite.com;

    # Specify a character set
    charset utf-8;

    # strip app.php/ prefix if it is present
    rewrite ^/app\.php/?(.*)$ /$1 permanent;

    # h5bp nginx configs
    # include conf/h5bp.conf;

    location / {
        index app.php;
        try_files $uri @rewriteapp;
    }

    location @rewriteapp {
        rewrite ^(.*)$ /app.php/$1 last;
    }

    # Deny access to .htaccess
    location ~ /\.ht {
        deny all;
    }

    # Don't log robots.txt or favicon.ico files
    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { access_log off; log_not_found off; }

    # 404 errors handled by our application, for instance Symfony
    error_page 404 /app.php;

    # pass the PHP scripts to FastCGI server from upstream phpfcgi
    location ~ ^/(app|app_dev|backend/app|backend/app_dev|config)\.php(/|$) {
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        # With php5-fpm:
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME web/$fastcgi_script_name;
        fastcgi_param HTTPS off;
        fastcgi_cache www_mysite_com;
        fastcgi_cache_valid 200 60m;
    }

    # Only for nginx-naxsi : process denied requests
    #location /RequestDenied {
    #    # For example, return an error code
    #    return 418;
    #}

    # redirect server error pages to the static page /50x.html
    #
    #error_page 500 502 503 504 /50x.html;
}
By default, responses from the Symfony 2 application have a cache control header that disables caching:
Cache-Control: no-cache
If you would like nginx to cache pages you will have to change those headers.
You can find general information about caching in the documentation
The simplest solution is to use the SensioFrameworkExtraBundle (you already have it if you use the SF2 Standard Edition) and use annotations on your controllers and/or actions to specify the cache headers. You can find more info about this approach in the docs for the @Cache annotation.
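Alternatively, if you would rather not touch the application, nginx can be told to ignore the headers that disable caching. This is only a sketch of that trade-off, added inside your existing PHP location block, and it risks caching responses that were never meant to be cached:

# inside the existing PHP location block
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
fastcgi_cache www_mysite_com;
fastcgi_cache_valid 200 60m;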
With nginx/0.7.65 I'm getting this error on line 4. Why doesn't it recognize the server directive?
#### CHAT_FRONT ####
server {
    listen 7000 default deferred;
    server_name example.com;
    root /home/deployer/apps/chat_front/current/public;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### CHAT_STORE ####
server {
    listen 7002 default deferred;
    server_name store.example.com;
    root /home/deployer/apps/chat_store/current/public;

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### LOGIN ####
server {
    listen 7004 default deferred;
    server_name login.example.com;
    root /home/deployer/apps/login/current/public;

    location ^~ /assets/ {
        gzip_static on;
        expires max;
        add_header Cache-Control public;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### PERMISSIONS ####
server {
    listen 7006 default deferred;
    server_name permissions.example.com;
    root /home/deployer/apps/permissions/current/public;

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### SEARCH ####
server {
    listen 7008 default deferred;
    server_name search.example.com;
    root /home/deployer/apps/search/current/public;

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}

#### ANALYTICS ####
server {
    listen 7010 default deferred;
    server_name analytics.example.com;
    root /home/deployer/apps/analytics/current/public;

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
The server directive must be contained in the context of the http module. Additionally, you are missing the top-level events block, which has one obligatory setting, and a number of stanzas which belong in the http block of your config. While the nginx documentation is not particularly helpful when it comes to creating a config from scratch, there are working examples there.
Source: nginx documentation on server directive
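A minimal skeleton showing the required nesting (values are taken from your first server block and can be adjusted):

events {
    worker_connections 1024;
}

http {
    server {
        listen 7000 default deferred;
        server_name example.com;
        root /home/deployer/apps/chat_front/current/public;
    }

    # the remaining server blocks go here, inside http
}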
Adding a top-level events entry got around the problem:
events { }
Try to adjust line endings and the encoding of your configuration file. In my case ANSI encoding and possibly "Linux style" line endings (only LF symbols, not both CR and LF symbols) were required.
I know it is a rather old question. However, the recommendation of the accepted answer (that the server directive must be contained in the context of the http module) might be confusing, because there are a lot of examples (on the Nginx website and in this Microsoft guide) where the server directive is not shown inside an http block.
Possibly the answer from sd z ("I rewrote the *.conf file and it worked") stems from the same cause: there was a .conf file with incorrect encoding or line endings, which was fixed once the file was rewritten.
I ran into an issue similar to Hoborg's, whose comment pointed me in the right direction. In my case, I had copied and pasted a config from another post to use for a Docker container hosting an Angular app. This was the default config plus one extra line. However, when I pasted, a few extra bytes were included at the beginning of the paste (EF BB BF, the UTF-8 byte order mark). The IDE I was using (Visual Studio) did not display anything to indicate these extra bytes, so it wasn't apparent that they existed. Removing these extra bytes resolved the issue; I did that with a hex editor (HxD) because it was convenient at the time. Windows-style line endings did not seem to be an issue in my case.
The default config had server as the outermost directive, so the top answer and the error message were indeed confusing. That part stayed exactly as it was in my case, and the original file (without the 3 extra bytes) had not thrown the error.
I rewrote the *.conf file and it worked.