Nginx isn't storing cache

I'm trying to enable nginx caching in its simplest form, but for some reason it's not working. I'm currently using nginx with Gunicorn and Flask on an EC2 instance.
This is my /etc/nginx/nginx.conf file:
user nginx;
...

proxy_cache_path /var/cache/nginx keys_zone=mycache:10m;
proxy_cache_methods GET HEAD POST;

server {
    listen 80;

    access_log /var/log/nginx/agori.access.log main;
    error_log /var/log/nginx/agori.error.log;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_cache mycache;
        proxy_cache_valid any 48h;
        proxy_buffering on;

        proxy_pass http://unix:/home/ec2-user/src/project.sock;
    }
}
When I check the /var/cache/nginx folder, it's empty. These are the folder's permissions:
drwxrwxrwx 2 nginx root 6 May 13 14:03 nginx
These are the request and response headers:
PS: This is on mobile (iOS).

It sounds to me like something in your nginx config is not correct (a syntax error, or a directive not supported by your nginx version). In most of the cases I have encountered so far, that was the cause.
You probably know nginx's reverse proxy example, which features the following configuration:
http {
    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=1g;

    server {
        location / {
            proxy_pass http://1.2.3.4;
            proxy_set_header Host $host;
            proxy_buffering on;
            proxy_cache STATIC;
            proxy_cache_valid 200 1d;
            proxy_cache_use_stale error timeout invalid_header updating
                                  http_500 http_502 http_503 http_504;
        }
    }
}
I compared that with your configuration file, and my debugging approach would be:
Does nginx log your requests in the access_log?
Check whether the example configuration works after minimal modifications.
Replace the any with a 200 for a start and see whether that works (see the sketch below).
If that works, add the additional config lines back in step by step, such as the proxy_cache_methods line.
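For example, a minimal variant to start from might look like this. This is only a sketch: the zone name and socket path are taken from your config, while the levels, inactive, and max_size values are assumptions borrowed from the reference example.

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mycache:10m inactive=24h max_size=1g;

server {
    listen 80;

    location / {
        proxy_set_header Host $http_host;
        proxy_buffering on;

        proxy_cache mycache;
        # start with an explicit 200 instead of "any"
        proxy_cache_valid 200 1d;

        proxy_pass http://unix:/home/ec2-user/src/project.sock;
    }
}

If that starts writing entries under /var/cache/nginx, re-add proxy_cache_methods, the any line, and the remaining headers one at a time until you find the directive that breaks caching.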

Related

nginx invalid URL prefix with rewrite

I'm using docker and running nginx alongside varnish.
Because I'm running docker, I've set the resolver manually at the top of the nginx configuration (resolver 127.0.0.11 ipv6=off valid=10s;) so that changes to container IPs will be picked up without needing to restart nginx.
This is the relevant part of the config that's giving me trouble:
location ~^/([a-zA-Z0-9/]+)$ {
    set $args ''; # clear out the entire query string
    set $card_name $1;
    set $card_name $card_name_lowercase;
    rewrite ^ /cards?card=$card_name break;

    proxy_set_header x-cache-key card-type-$card_name;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header REQUEST_URI $request_uri;
    proxy_http_version 1.1;

    set $backend "http://varnish:80";
    proxy_pass $backend;

    proxy_intercept_errors on;
    proxy_connect_timeout 60s;
    proxy_send_timeout 86400s;
    proxy_read_timeout 86400s;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    error_page 503 /maintenance.html;
}
When I visit a matching URL, e.g. https://example.com/Test, I get a 500 Internal Server Error.
In the nginx error log, I see the following:
2022/04/27 23:59:45 [error] 53#53: *1 invalid URL prefix in "", client: 10.211.55.2, server: example.com, request: "GET /Test HTTP/2.0", host: "example.com"
I'm not sure what's causing this issue -- http:// is included in the backend, so it does have a proper prefix.
If I just use proxy_pass http://varnish:80, it works fine, but the backend needs to be a variable in order to force docker to use the resolver.
I've stumbled across a similar issue. I'm not sure why, but defining the
set $backend "http://varnish:80";
outside of the location block fixed it for me.
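A sketch of that arrangement, assuming the same Docker resolver and varnish upstream as in the question (most of the other directives are omitted for brevity):

resolver 127.0.0.11 ipv6=off valid=10s;

server {
    # the variable is defined at server level rather than inside the location block
    set $backend "http://varnish:80";

    location ~^/([a-zA-Z0-9/]+)$ {
        set $card_name $1;
        rewrite ^ /cards?card=$card_name break;
        proxy_pass $backend;
    }
}

proxy_pass still references a variable, so nginx keeps re-resolving the hostname through the configured resolver; only the place where the variable is defined changes.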

nginx: proxy_pass a tracking script form plausible to bypass adblockers - script location results in 403 - with nginx reverse proxy and wordpress

I am using Plausible Analytics and I would like to bypass adblockers. According to Plausible's docs, this is possible with a simple proxy_pass section in the nginx config file:
# Only needed if you cache the plausible script. Speeds things up.
# Note: to use the `proxy_cache` setup, you'll need to make sure the `/var/run/nginx-cache`
# directory exists (e.g. creating it in a build step with `mkdir -p /var/run/nginx-cache`)
proxy_cache_path /var/run/nginx-cache/jscache levels=1:2 keys_zone=jscache:100m inactive=30d use_temp_path=off max_size=100m;

server {
    ...

    location = /js/script.js { ######## <-- my problem lies here
        proxy_pass https://plausible.io/js/plausible.js;

        # Tiny, negligible performance improvement. Very optional.
        proxy_buffering on;

        # Cache the script for 6 hours, as long as plausible.io returns a valid response
        proxy_cache jscache;
        proxy_cache_valid 200 6h;
        proxy_cache_use_stale updating error timeout invalid_header http_500;

        # Optional. Adds a header to tell if you got a cache hit or miss
        add_header X-Cache $upstream_cache_status;
    }

    location = /api/event {
        proxy_pass https://plausible.io/api/event;
        proxy_buffering on;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
    }
}
My problem lies in the location part. I have tried all kinds of locations such as:
location = /wp-content/themes/Avada-Child-Theme/script_pa.js {...
but it always results in a 403. As for the read/write permissions, an ls -la of the child theme folder shows:
drwxr-xr-x 6 www-data www-data 4096 Feb 21 17:01 Avada-Child-Theme
I have asked this in the Plausible support forum, but there has been no response so far.
I am using the jwilder nginx reverse proxy.
What am I missing?
Thank you!

How to make Jenkins accessible by hostname?

I created an Ubuntu 19.10 VirtualBox VM and installed OpenJDK 8, Nginx 1.16.1, and Jenkins 2.222.1 there. I can access Jenkins via the VM's IP address, like http://{IP_OF_THE_VM}:8080. Now I also want to be able to access it by hostname, like https://jenkins.ciserver.loc/.
Here is the VHost file /etc/nginx/sites-available/jenkins.ciserver.loc:
upstream jenkins {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name jenkins.ciserver.loc;

    access_log /var/log/nginx/jenkins.access.log;
    error_log /var/log/nginx/jenkins.error.log;

    proxy_buffers 16 64k;
    proxy_buffer_size 128k;

    location / {
        proxy_pass http://jenkins;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
When I request http://ci.ciserver.loc in the browser, I get "This site can’t be reached" and the request ends up in an ERR_SOCKET_NOT_CONNECTED error.
How to configure Jenkins and/or Nginx correctly to make Jenkins accessible by the hostname?
SOLVED
It was a stupid typo... I had set server_name to jenkins.ciserver.loc but was trying to request ci.ciserver.loc the whole time. After correcting the requested URL to http://jenkins.ciserver.loc, it started working.

Nginx upstream failure configuration file

I'm trying to start up my Node service behind my nginx web server, but I keep getting this error when I run nginx -t:
nginx: [emerg] "upstream" directive is not allowed here in /etc/nginx/nginx.conf:3
nginx: configuration file /etc/nginx/nginx.conf test failed
My current nginx.conf is like this:
upstream backend {
    server 127.0.0.1:5555;
}

map $sent_http_content_type $charset {
    ~^text/ utf-8;
}

server {
    listen 80;
    listen [::]:80;

    server_name mywebsite.com;
    server_tokens off;

    client_max_body_size 100M; # Change this to the max file size you want to allow

    charset $charset;
    charset_types *;

    # Uncomment if you are running behind CloudFlare.
    # This requires NGINX compiled from source with:
    #   --with-http_realip_module
    #include /path/to/real-ip-from-cf;

    location / {
        add_header Access-Control-Allow-Origin *;
        root /path/to/your/uploads/folder;
        try_files $uri @proxy;
    }

    location @proxy {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://backend;
        proxy_redirect off;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_redirect off;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
I tried to look up some solutions but nothing seem to work for my situation.
Edit: Yes, I did edit the paths and placeholders properly.
tl;dr: The upstream directive must be placed inside an http block.
nginx configuration files usually have events and http blocks at the top-most level, and then server, upstream, and other directives nested inside http. Something like this:
events {
    worker_connections 768;
}

http {
    upstream foo {
        server localhost:8000;
    }

    server {
        listen 80;
        ...
    }
}
Sometimes, instead of nesting the server block explicitly, the configuration is spread across multiple files and the include directive is used to "merge" them all together:
http {
    include /etc/nginx/sites-enabled/*;
}
Your config doesn't show us an enclosing http block, so you are most likely running nginx -t against a partial config. You should either a) add those enclosing blocks to your config, or b) rename this file and issue an include for it within your main nginx.conf to pull everything together.
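A minimal sketch of option (b), assuming the existing file is moved to /etc/nginx/conf.d/mywebsite.conf (the path and file name are placeholders):

# /etc/nginx/nginx.conf
events {
    worker_connections 768;
}

http {
    # pulls in the file that now holds the upstream, map, and server blocks
    include /etc/nginx/conf.d/*.conf;
}

After restructuring, nginx -t should pass, since the upstream directive now sits inside the http context.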

nginx removes content length http header after proxy_pass

I have a problem where nginx removes the Content-Length header after proxy_pass. The application back-end sends a gzip stream but specifies the content length. nginx switches the response to chunked transfer encoding and removes the Content-Length header. This is not acceptable for the app, since the response is read not by a browser but by a proprietary app that requires the Content-Length to be specified.
After specifying chunked_transfer_encoding off; it no longer uses chunked encoding, but it still removes the Content-Length. How can I disable any header modifications in nginx?
The config:
upstream backend {
    server 127.0.0.1:9090;
}

server {
    root /usr/share/nginx/www;
    index index.html index.htm;

    chunked_transfer_encoding off;

    location / {
        proxy_pass http://backend;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;
        proxy_buffering off;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
This was a known bug in nginx in the past. Update to the latest build.
http://forum.nginx.org/read.php?2,216085,216085#msg-216085
