SOLVED - Installing nextcloud on nginx, getting error 500 - nginx

I'm following the instructions to install Nextcloud on an nginx server.
I copied the configuration from the official documentation, set my server name and my SSL certificate path, and when I try to reach Nextcloud from my browser I get
"500 Internal server error".
When I check the error.log I get
rewrite or internal redirection cycle while processing "/index.php"
This is my configuration file:
upstream php-handler {
#server 127.0.0.1:9000;
server unix:/var/run/php/php7.3-fpm.sock;
}
server {
listen 80;
listen [::]:80;
server_name mrbackslash.tk;
# enforce https
return 301 https://$server_name:443$request_uri;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name mrbackslash.tk;
# Use Mozilla's guidelines for SSL/TLS settings
# https://mozilla.github.io/server-side-tls/ssl-config-generator/
# NOTE: some settings below might be redundant
ssl_certificate /etc/ssl/mrbackslash_tk_cert.crt;
ssl_certificate_key /etc/ssl/mrbackslash_tk_key.key;
# Add headers to serve security related headers
# Before enabling Strict-Transport-Security headers please read into this
# topic first.
#add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always;
#
# WARNING: Only add the preload option once you read about
# the consequences in https://hstspreload.org/. This option
# will add the domain to a hardcoded list that is shipped
# in all major browsers and getting removed from this list
# could take several months.
add_header Referrer-Policy "no-referrer" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Download-Options "noopen" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Permitted-Cross-Domain-Policies "none" always;
add_header X-Robots-Tag "none" always;
add_header X-XSS-Protection "1; mode=block" always;
# Remove X-Powered-By, which is an information leak
fastcgi_hide_header X-Powered-By;
# Path to the root of your installation
root /var/www/mrbackslash.tk/public-html/;
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
# The following 2 rules are only needed for the user_webfinger app.
# Uncomment it if you're planning to use this app.
#rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
#rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
# The following rule is only needed for the Social app.
# Uncomment it if you're planning to use this app.
#rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
location = /.well-known/carddav {
return 301 $scheme://$host:$server_port/remote.php/dav;
}
location = /.well-known/caldav {
return 301 $scheme://$host:$server_port/remote.php/dav;
}
# set max upload size
client_max_body_size 512M;
fastcgi_buffers 64 4K;
# Enable gzip but do not remove ETag headers
gzip on;
gzip_vary on;
gzip_comp_level 4;
gzip_min_length 256;
gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
# Uncomment if your server is built with the ngx_pagespeed module
# This module is currently not supported.
#pagespeed off;
location / {
rewrite ^ /index.php$request_uri;
}
location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ {
deny all;
}
location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) {
deny all;
}
location ~ ^/(?:index|remote|public|cron|core/ajax/update|status|ocs/v[12]|updater/.+|ocs-provider/.+)\.php(?:$|/) {
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param HTTPS on;
#Avoid sending the security headers twice
fastcgi_param modHeadersAvailable true;
fastcgi_param front_controller_active true;
fastcgi_pass php-handler;
fastcgi_intercept_errors on;
fastcgi_request_buffering off;
}
location ~ ^\/(?:updater|oc[ms]-provider)(?:$|\/) {
try_files $uri/ =404;
index index.php;
}
# Adding the cache control header for js, css and map files
# Make sure it is BELOW the PHP block
location ~ \.(?:css|js|woff2?|svg|gif|map)$ {
try_files $uri /index.php$request_uri;
add_header Cache-Control "public, max-age=15778463";
# Add headers to serve security related headers (It is intended to
# have those duplicated to the ones above)
# Before enabling Strict-Transport-Security headers please read into
# this topic first.
#add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;" always;
#
# WARNING: Only add the preload option once you read about
# the consequences in https://hstspreload.org/. This option
# will add the domain to a hardcoded list that is shipped
# in all major browsers and getting removed from this list
# could take several months.
add_header Referrer-Policy "no-referrer" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Download-Options "noopen" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Permitted-Cross-Domain-Policies "none" always;
add_header X-Robots-Tag "none" always;
add_header X-XSS-Protection "1; mode=block" always;
# Optional: Don't log access to assets
access_log off;
}
location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap|mp4|webm)$ {
try_files $uri /index.php$request_uri;
# Optional: Don't log access to other assets
access_log off;
}
}
Help!

I solved the issue by re-uploading the configuration file via FTP; pasting it into nano over the SSH shell was a bad idea!
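For future reference, a quick way to confirm that the uploaded file actually matches the original before reloading (the paths below are only examples, adjust them to your layout):
# on the workstation
md5sum nextcloud.conf
# on the server, after the upload
md5sum /etc/nginx/sites-available/nextcloud.conf
# if the checksums match, test and reload
sudo nginx -t && sudo systemctl reload nginx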

Related

nginx Nextcloud config not working (ERR_TOO_MANY_REDIRECTS)

I wanted to switch from apache2 to nginx. Now I have tried moving my Nextcloud config to nginx (using the following "template": https://docs.nextcloud.com/server/latest/admin_manual/installation/nginx.html), but I only get ERR_TOO_MANY_REDIRECTS (since it worked before switching to nginx, it should be an nginx configuration problem).
upstream php-handler {
server 127.0.0.1:9000;
#server unix:/var/run/php/php7.4-fpm.sock;
}
# Set the `immutable` cache control options only for assets with a cache busting `v` argument
map $arg_v $asset_immutable {
"" "";
default "immutable";
}
server {
listen 80;
listen [::]:80;
server_name cloud.link.com;
# Prevent nginx HTTP Server Detection
server_tokens off;
# Enforce HTTPS
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name cloud.link.com;
# Path to the root of your installation
root /var/www/cloud.link.com;
# Use Mozilla's guidelines for SSL/TLS settings
# https://mozilla.github.io/server-side-tls/ssl-config-generator/
ssl_certificate /root/cloudflare/file.pem;
ssl_certificate_key /root/cloudflare/file.key;
# Prevent nginx HTTP Server Detection
server_tokens off;
# HSTS settings
# WARNING: Only add the preload option once you read about
# the consequences in https://hstspreload.org/. This option
# will add the domain to a hardcoded list that is shipped
# in all major browsers and getting removed from this list
# could take several months.
add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload" always;
# set max upload size and increase upload timeout:
client_max_body_size 512M;
client_body_timeout 300s;
fastcgi_buffers 64 4K;
# Enable gzip but do not remove ETag headers
gzip on;
gzip_vary on;
gzip_comp_level 4;
gzip_min_length 256;
gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/wasm application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
# Pagespeed is not supported by Nextcloud, so if your server is built
# with the `ngx_pagespeed` module, uncomment this line to disable it.
#pagespeed off;
# This setting allows you to optimize the HTTP/2 bandwidth.
# See https://blog.cloudflare.com/delivering-http-2-upload-speed-improvements/
# for tuning hints
client_body_buffer_size 512k;
# HTTP response headers borrowed from Nextcloud `.htaccess`
add_header Referrer-Policy "no-referrer" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Download-Options "noopen" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Permitted-Cross-Domain-Policies "none" always;
add_header X-Robots-Tag "none" always;
add_header X-XSS-Protection "1; mode=block" always;
# Remove X-Powered-By, which is an information leak
fastcgi_hide_header X-Powered-By;
# Specify how to handle directories -- specifying `/index.php$request_uri`
# here as the fallback means that Nginx always exhibits the desired behaviour
# when a client requests a path that corresponds to a directory that exists
# on the server. In particular, if that directory contains an index.php file,
# that file is correctly served; if it doesn't, then the request is passed to
# the front-end controller. This consistent behaviour means that we don't need
# to specify custom rules for certain paths (e.g. images and other assets,
# `/updater`, `/ocm-provider`, `/ocs-provider`), and thus
# `try_files $uri $uri/ /index.php$request_uri`
# always provides the desired behaviour.
index index.php index.html /index.php$request_uri;
# Rule borrowed from `.htaccess` to handle Microsoft DAV clients
location = / {
if ( $http_user_agent ~ ^DavClnt ) {
return 302 /remote.php/webdav/$is_args$args;
}
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
# Make a regex exception for `/.well-known` so that clients can still
# access it despite the existence of the regex rule
# `location ~ /(\.|autotest|...)` which would otherwise handle requests
# for `/.well-known`.
location ^~ /.well-known {
# The rules in this block are an adaptation of the rules
# in `.htaccess` that concern `/.well-known`.
location = /.well-known/carddav { return 301 /remote.php/dav/; }
location = /.well-known/caldav { return 301 /remote.php/dav/; }
location /.well-known/acme-challenge { try_files $uri $uri/ =404; }
location /.well-known/pki-validation { try_files $uri $uri/ =404; }
# Let Nextcloud's API for `/.well-known` URIs handle all other
# requests by passing them to the front-end controller.
return 301 /index.php$request_uri;
}
# Rules borrowed from `.htaccess` to hide certain paths from clients
location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/) { return 404; }
location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) { return 404; }
# Ensure this block, which passes PHP files to the PHP process, is above the blocks
# which handle static assets (as seen below). If this block is not declared first,
# then Nginx will encounter an infinite rewriting loop when it prepends `/index.php`
# to the URI, resulting in a HTTP 500 error response.
location ~ \.php(?:$|/) {
# Required for legacy support
rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+|.+\/richdocumentscode\/proxy) /index.php$request_uri;
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
set $path_info $fastcgi_path_info;
try_files $fastcgi_script_name =404;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $path_info;
fastcgi_param HTTPS on;
fastcgi_param modHeadersAvailable true; # Avoid sending the security headers twice
fastcgi_param front_controller_active true; # Enable pretty urls
fastcgi_pass php-handler;
fastcgi_intercept_errors on;
fastcgi_request_buffering off;
fastcgi_max_temp_file_size 0;
}
location ~ \.(?:css|js|svg|gif|png|jpg|ico|wasm|tflite|map)$ {
try_files $uri /index.php$request_uri;
add_header Cache-Control "public, max-age=15778463, $asset_immutable";
access_log off; # Optional: Don't log access to assets
location ~ \.wasm$ {
default_type application/wasm;
}
}
location ~ \.woff2?$ {
try_files $uri /index.php$request_uri;
expires 7d; # Cache-Control policy borrowed from `.htaccess`
access_log off; # Optional: Don't log access to assets
}
# Rule borrowed from `.htaccess`
location /remote {
return 301 /remote.php$request_uri;
}
location / {
try_files $uri $uri/ /index.php$request_uri;
}
}
Does anyone have a solution for this problem?

How to fix Nginx cache so it will retain the cache as instructed

I have an Nginx server with cache enabled, and when I analyzed the HIT/MISS results, I discovered that Nginx marked as MISS many files that it had fetched just a few minutes ago.
My configuration file looks like this:
http {
proxy_cache_path /usr/local/openresty/nginxCeche levels=1:2 keys_zone=my_cache:20g max_size=900g use_temp_path=off inactive=1y;
log_format json_combined escape=json
'{'
'"remote_addr":"$remote_addr",'
'"remote_user":"$remote_user",'
'"time_local":"$time_local",'
'"status":"$status",'
'"bytes_sent":"$bytes_sent",'
'"http_referer":"$http_referer",'
'"http_user_agent":"$http_user_agent",'
'"upstream_cache_status":"$upstream_cache_status",'
'"request_uri":"$request_uri",'
'"limit_rate":"$limit_rate",'
'"host":"$host",'
'"hostname":"$hostname",'
'"request_time":"$request_time",'
'"upstream_http_V1Latency":"$upstream_http_V1Latency",'
'"upstream_http_ProxyCache":"$upstream_http_ProxyCache",'
'"server_addr":"$server_addr"'
'}';
# general performance settings
charset utf-8;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
server_tokens off;
log_not_found off;
types_hash_max_size 2048;
types_hash_bucket_size 64;
client_max_body_size 100M;
# using threads
aio threads;
# Limits - so it will be harder to DOS me
limit_req_log_level warn;
limit_req_zone $binary_remote_addr zone=login:10m rate=10r/m;
# Proxy server
upstream proxy_backend {
server proxyservice;
}
server {
listen 80;
keepalive_timeout 70;
# security headers
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'; frame-ancestors 'self';" always;
add_header Permissions-Policy "interest-cohort=()" always;
# . files
location ~ /\.(?!well-known) {
deny all;
}
# restrict methods
if ($request_method !~ ^(GET|HEAD)$) {
return '405';
}
##
# Logging Settings - added compression and a buffer to ease the file access
##
access_log /var/log/nginx/access.log_1 json_combined buffer=64k gzip;
##
# Gzip Settings
##
gzip on;
location ~ "^\/next(?<myurl>\/(?:[0-9A-Fa-f]{2}){16}\/(?:[0-9A-Fa-f]{2}){16}\/(?:.?[^\/]+))$" {
# solve the issue of gateway timeout
proxy_read_timeout 300s;
proxy_cache my_cache;
# at this stage the $uri is the sky link
proxy_cache_key $continue_url;
# set the caching time for 200 responses to 60 days
proxy_cache_valid 200 60d;
# Clear headers I don't want the clients to see
more_clear_headers 'Access-Control-Allow-Headers';
more_clear_headers 'Access-Control-Allow-Origin';
more_clear_headers 'access*';
more_clear_headers 'content-disposition';
more_clear_headers 'Date';
more_clear_headers 'x-proxy-cache';
more_clear_headers 'Server';
# add headers to handle CORS
more_set_headers 'Access-Control-Allow-Origin: *';
add_header Access-Control-Allow-Headers '*';
# to reduce bandwidth I use compression
gzip on;
# set up the proxy, and instruct it to cache even if there is no Cache-Control
proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
proxy_cache_lock on;
# the Rust-based reverse proxy
proxy_pass http://proxy_backend$continue_url;
# set local caching on the client side; currently we will start at 365 days
expires 365d;
add_header Pragma public;
add_header Cache-Control "public";
add_header Content-Type "application/vnd.apple.mpegurl";
}
}
}
I will explain what I have tried to achieve here:
I wanted Nginx to delete the cache only if there isn't enough memory, so that it had an LRU cache.
Since I couldn't find this option, I declared:
inactive=1y (to prevent caches from being evicted because no one touches them)
proxy_ignore_headers X-Accel-Expires Expires Cache-Control; (to always cache results)
proxy_cache_lock on; (if I receive two requests, one will access the backend, the other will retrieve files from the cache)
proxy_cache_valid 200 60d; (keep 200 results for 60 days)
However, I get many misses in reality.
Is the source of the issue my configuration? If so, how can I fix it?
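In case it helps with diagnosing, I am also thinking about exposing the cache status on every response while I test (the X-Cache-Status header name is just my own choice):
add_header X-Cache-Status $upstream_cache_status always;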

NGINX Reverse Proxy - Bad Gateway

I searched a lot and tried a lot, but at this point I don't get it... so here is my question:
I have a newly set up Proxmox host and I want to run an nginx reverse proxy with some VMs behind it. It is my first time with nginx and a reverse proxy; I only used Apache before and never a reverse proxy.
So my reverse proxy basically has three files: headers.conf, ssl.conf and my.domain.com.conf.
In headers.conf is the following:
#
# Add headers to serve security related headers
#
# HSTS (ngx_http_headers_module is required)
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload;" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Robots-Tag none always;
add_header X-Download-Options noopen always;
add_header X-Permitted-Cross-Domain-Policies none always;
add_header Referrer-Policy no-referrer always;
add_header X-Frame-Options "SAMEORIGIN" always;
# Remove X-Powered-By, which is an information leak
fastcgi_hide_header X-Powered-By;
#prox headers
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
Then there is the ssl.conf:
#
# Configure SSL
#
# Diffie-Hellman parameter for DHE ciphersuites, recommended 4096 bits
ssl_dhparam /etc/nginx/dhparams/dhparams.pem;
# Not using TLSv1 will break:
# Android <= 4.4.40 IE <= 10 IE mobile <=10
# Removing TLSv1.1 breaks nothing else!
ssl_protocols TLSv1.2 TLSv1.3;
# SSL ciphers: RSA + ECDSA
# Two certificate types (ECDSA, RSA) are needed.
ssl_ciphers 'TLS-CHACHA20-POLY1305-SHA256:TLS-AES-256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-A>
# Use multiple curves.
ssl_ecdh_curve secp521r1:secp384r1;
# Server should determine the ciphers, not the client
ssl_prefer_server_ciphers on;
# SSL session handling
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
# SSL stapling has to be done separately, because it will not work with self-signed certs
# OCSP Stapling fetch OCSP records from URL in ssl_certificate and cache them
ssl_stapling on;
ssl_stapling_verify on;
# DNS resolver
resolver 192.168.xxx.xx;
and the my.domain.com.conf:
server {
listen 80;
listen [::]:80;
server_name my.domain.com;
location ~ \.* {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name my.domain.com;
# SSL configuration
# RSA certificates
ssl_certificate /etc/letsencrypt/my.domain.com/rsa/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/my.domain.com/rsa/key.pem;
# ECC certificates
ssl_certificate /etc/letsencrypt/my.domain.com/ecc/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/my.domain.com/ecc/key.pem;
# This should be ca.pem (certificate with the additional intermediate certificate)
# See here: https://certbot.eff.org/docs/using.html
# ECC
ssl_trusted_certificate /etc/letsencrypt/my.domain.com/ecc/ca.pem;
# Include SSL configuration
include /etc/nginx/snippets/ssl.conf;
# Include headers
include /etc/nginx/snippets/headers.conf;
# Set the access log location
access_log /var/log/nginx/my.domain.access.log;
location ~ \.* {
proxy_pass http://192.168.xxx.xxx:80;
proxy_read_timeout 90;
proxy_redirect http://192.168.xxx.xxx:80 https://my.domain.com;
}
}
That is the reverse proxy side.
The VM has nginx as well and the following file, my.domain.com.conf:
upstream php-handler {
server unix:/run/php/php7.4-fpm.sock;
}
server {
listen 80;
listen [::]:80;
server_name my.domain.com;
#root /var/www;
# location / {
# return 301 https://$host$request_uri;
# }
# Path to the root of your installation
root /var/www/nextcloud;
# Specify how to handle directories -- specifying `/index.php$request_uri`
# here as the fallback means that Nginx always exhibits the desired behaviour
# when a client requests a path that corresponds to a directory that exists
# on the server. In particular, if that directory contains an index.php file,
# that file is correctly served; if it doesn't, then the request is passed to
# the front-end controller. This consistent behaviour means that we don't need
# to specify custom rules for certain paths (e.g. images and other assets,
# `/updater`, `/ocm-provider`, `/ocs-provider`), and thus
# `try_files $uri $uri/ /index.php$request_uri`
# always provides the desired behaviour.
index index.php index.html /index.php$request_uri;
# set max upload size and increase upload timeout:
client_max_body_size 512M;
client_body_timeout 300s;
fastcgi_buffers 64 4K;
# Enable gzip but do not remove ETag headers
gzip on;
gzip_vary on;
gzip_comp_level 4;
gzip_min_length 256;
gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
# Rule borrowed from `.htaccess` to handle Microsoft DAV clients
location = / {
if ( $http_user_agent ~ ^DavClnt ) {
return 302 /remote.php/webdav/$is_args$args;
}
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
# Make a regex exception for `/.well-known` so that clients can still
# access it despite the existence of the regex rule
# `location ~ /(\.|autotest|...)` which would otherwise handle requests
# for `/.well-known`.
location ^~ /.well-known {
# The rules in this block are an adaptation of the rules
# in `.htaccess` that concern `/.well-known`.
location = /.well-known/carddav { return 301 /remote.php/dav/; }
location = /.well-known/caldav { return 301 /remote.php/dav/; }
#location /.well-known/acme-challenge { try_files $uri $uri/ =404; }
#location /.well-known/pki-validation { try_files $uri $uri/ =404; }
# Let Nextcloud's API for `/.well-known` URIs handle all other
# requests by passing them to the front-end controller.
return 301 /index.php$request_uri;
}
# Rules borrowed from `.htaccess` to hide certain paths from clients
location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/) { return 404; }
location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) { return 404; }
# Ensure this block, which passes PHP files to the PHP process, is above the blocks
# which handle static assets (as seen below). If this block is not declared first,
# then Nginx will encounter an infinite rewriting loop when it prepends `/index.php`
# to the URI, resulting in a HTTP 500 error response.
location ~ \.php(?:$|/) {
# Required for legacy support
rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+|.+\/richdocumentscode\/proxy) /index.php$request_uri;
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
set $path_info $fastcgi_path_info;
try_files $fastcgi_script_name =404;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $path_info;
fastcgi_param HTTPS on;
fastcgi_param modHeadersAvailable true; # Avoid sending the security headers twice
fastcgi_param front_controller_active true; # Enable pretty urls
fastcgi_pass php-handler;
fastcgi_intercept_errors on;
fastcgi_request_buffering off;
fastcgi_read_timeout 600;
fastcgi_send_timeout 600;
fastcgi_connect_timeout 600;
fastcgi_param PHP_VALUE "upload_max_filesize = 10G
post_max_size = 10G
max_execution_time = 3600
output_buffering = off";
}
location ~ \.(?:css|js|svg|gif|png|jpg|ico|wasm|tflite)$ {
try_files $uri /index.php$request_uri;
expires 6M; # Cache-Control policy borrowed from `.htaccess`
access_log off; # Optional: Don't log access to assets
location ~ \.wasm$ {
default_type application/wasm;
}
}
location ~ \.woff2?$ {
try_files $uri /index.php$request_uri;
expires 7d; # Cache-Control policy borrowed from `.htaccess`
access_log off; # Optional: Don't log access to assets
}
# Rule borrowed from `.htaccess`
location /remote {
return 301 /remote.php$request_uri;
}
}
I did all that from reading different tutorials and the manuals themselves. But I totally don't see my mistake... is there anything obvious you can spot? I'd be very happy for any hint you guys have for me!
Thank you a lot and have a good Christmas time :)
This is the output:
root@HL-Reverse-Proxy:/var/www# telnet 192.168.178.100 80
Trying 192.168.178.100...
Connected to 192.168.178.100.
Escape character is '^]'.
GET /
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.21.4</center>
</body>
</html>
Connection closed by foreign host.
So connection is possible.
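Since the 502 is generated by the VM's own nginx, my next step is to check the PHP-FPM side on the VM (assuming the php7.4-fpm socket from the config above):
# is PHP-FPM running, and does the socket the upstream points to exist?
systemctl status php7.4-fpm
ls -l /run/php/php7.4-fpm.sock
# watch the VM's own error log while repeating the request
tail -f /var/log/nginx/error.log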

CORS - missing allow origin header - Nginx on Centos7

I had to re-set up some projects on my local machine using CentOS 7 on a Vagrant box. After the setup I am getting a CORS error and I am not able to figure out why.
This is nginx.conf file:
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 4096;
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
server {
listen 80;
listen [::]:80;
server_name _;
root /usr/share/nginx/html;
# Load configuration files for the default server block.
include /etc/nginx/default.d/*.conf;
error_page 404 /404.html;
location = /404.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
}
And this is my project conf file. I don't think the name of the .conf file matters, right?
server {
listen 80;
server_name authentication.service.local ;
root /var/www/authentication-api/public;
index index.php;
gzip on;
gzip_http_version 1.1;
gzip_comp_level 5;
gzip_min_length 256;
gzip_proxied any;
gzip_types
application/atom+xml
application/javascript
application/json
application/rss+xml
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
application/xml
font/opentype
image/svg+xml
image/x-icon
text/css
text/plain
text/x-component;
gzip_vary on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
add_header X-Frame-Options "SAMEORIGIN";
location /proxy.html {
root /var/www/authentication-api/public;
}
location /xdomain.min.js {
root /var/www/authentication-api/public;
}
location / {
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS, PATCH';
add_header 'Access-Control-Allow-Headers' 'DNT,Authorization,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain charset=UTF-8';
add_header 'Content-Length' 0;
return 204;
}
try_files $uri $uri/ /index.php?$query_string;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
}
I am able to ping the project correctly using ping authentication.service.local
But when I try to hit an endpoint using a browser (eg: login, which was working before), I get an error Failed to load resource: Origin http://localhost:7000 is not allowed by Access-Control-Allow-Origin
I thought adding add_header 'Access-Control-Allow-Origin' '*'; would resolve this issue, so I'm trying to understand why it doesn't.
Also the nginx error.log shows
2021/09/15 17:50:17 [error] 3290#3290: *8 open() "/usr/share/nginx/html/token/user" failed (2: No such file or directory), client: 10.0.5.1, server: _, request: "POST /token/user HTTP/1.1", host: "authentication.service.local", referrer: "http://localhost:7000/"
Since my /var/www folder points to my projects folder, is "/usr/share/nginx/html/token/user" incorrect?
Thanks for your help.
You're sending a POST request, but only set the Access-Control-Allow-Origin header for the OPTIONS method.
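A minimal sketch of one way to fix it, keeping your existing OPTIONS block and adding the header at the location level so it also applies to the actual request (the always flag makes it survive non-2xx responses too):
location / {
    # sent for the real GET/POST requests, not only the preflight
    add_header 'Access-Control-Allow-Origin' '*' always;
    if ($request_method = 'OPTIONS') {
        # ... keep your existing preflight headers and `return 204;` here ...
    }
    try_files $uri $uri/ /index.php?$query_string;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}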

nginx add_header not working

I am having an intriguing problem: whenever I use add_header in my virtual host configuration on an Ubuntu server running nginx with PHP and php-fpm, it simply doesn't work, and I have no idea what I am doing wrong. Here is my config file:
server {
listen 80; ## listen for ipv4; this line is default and implied
#listen [::]:80 default ipv6only=on; ## listen for ipv6
root /var/www/example.com/webroot/;
index index.html index.htm index.php;
# Make site accessible from http://www.example.com/
server_name www.example.com;
# max request size
client_max_body_size 20m;
# enable gzip compression
gzip on;
gzip_static on;
gzip_min_length 1000;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Headers' 'Authorization,Content-Type,Accept,Origin,User-Agent,DNT,Cache-Control,X-Mx-ReqToken';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';
add_header PS 1
location / {
# First attempt to serve request as file, then
# as directory, then fall back to index.html
try_files $uri $uri/ /index.php?$query_string;
# Uncomment to enable naxsi on this location
# include /etc/nginx/naxsi.rules
}
location ~* \.(css|js|asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|odb|odc|odf|odg|odp|ods|odt|ogg|ogv|$
# 1 year -> 31536000
expires 500s;
access_log off;
log_not_found off;
add_header Pragma public;
add_header Cache-Control "max-age=31536000, public";
}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
# With php5-cgi alone:
#fastcgi_pass 127.0.0.1:9000;
# With php5-fpm:
fastcgi_pass unix:/var/run/example.sock;
fastcgi_index index.php?$query_string;
include fastcgi_params;
# instead I want to get the value from Origin request header
}
# Deny access to hidden files
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
error_page 403 /403/;
}
server {
listen 80;
server_name example.com;
rewrite ^ http://www.example.com$request_uri? permanent;
}
I've tried adding the headers to the other location sections but the result is the same.
Any help appreciated!!
There were two issues for me.
One is that nginx only applies the add_header directives from the deepest context that defines any. So if you have an add_header in the server context and another inside a nested location context, it will only apply the add_header directives inside the location context. Only the deepest context.
From the NGINX docs on add_header:
There could be several add_header directives. These directives are inherited from the previous level if and only if there are no add_header directives defined on the current level.
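A contrived illustration of that rule (the header names and the /api/ path are only examples):
server {
    add_header X-Frame-Options "SAMEORIGIN";
    location /api/ {
        # this level defines its own add_header, so the server-level
        # X-Frame-Options above is NOT sent for /api/ responses...
        add_header 'Access-Control-Allow-Origin' '*';
        # ...unless it is repeated here as well
        add_header X-Frame-Options "SAMEORIGIN";
    }
}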
The second issue was that the location / {} block I had in place was actually sending nginx to the other location ~* (\.php)$ block (because it rewrites all requests through index.php, and that makes nginx process this php block). So my add_header directives inside the first location block were useless, and it started working after I put all the directives I needed inside the php location block.
Finally, here's my working configuration to allow CORS in the context of an MVC framework called Laravel (you could change this easily to fit any PHP framework that has index.php as a single entry point for all requests).
server {
root /path/to/app/public;
index index.php;
server_name test.dev;
# redirection to index.php
location / {
try_files $uri $uri/ /index.php?$query_string;
}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
# With php5-fpm:
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
# cors configuration
# whitelist of allowed domains, via a regular expression
# if ($http_origin ~* (http://localhost(:[0-9]+)?)) {
if ($http_origin ~* .*) { # yeah, for local development. tailor your regex as needed
set $cors "true";
}
# apparently, the following three if statements create a flag for "compound conditions"
if ($request_method = OPTIONS) {
set $cors "${cors}options";
}
if ($request_method = GET) {
set $cors "${cors}get";
}
if ($request_method = POST) {
set $cors "${cors}post";
}
# now process the flag
if ($cors = 'trueget') {
add_header 'Access-Control-Allow-Origin' "$http_origin";
add_header 'Access-Control-Allow-Credentials' 'true';
}
if ($cors = 'truepost') {
add_header 'Access-Control-Allow-Origin' "$http_origin";
add_header 'Access-Control-Allow-Credentials' 'true';
}
if ($cors = 'trueoptions') {
add_header 'Access-Control-Allow-Origin' "$http_origin";
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Max-Age' 1728000; # cache preflight value for 20 days
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'Authorization,Content-Type,Accept,Origin,User-Agent,DNT,Cache-Control,X-Mx-ReqToken,Keep-Alive,X-Requested-With,If-Modified-Since';
add_header 'Content-Length' 0;
add_header 'Content-Type' 'text/plain charset=UTF-8';
return 204;
}
}
error_log /var/log/nginx/test.dev.error.log;
access_log /var/log/nginx/test.dev.access.log;
}
The gist for the above is at: https://gist.github.com/adityamenon/6753574
I had the issue of not getting the response header because the response code was not within the allowed range; you have to specify the "always" keyword after the header value.
From the official docs:
Adds the specified field to a response header provided that the response code equals 200, 201, 204, 206, 301, 302, 303, 304, 307, or 308. The value can contain variables.
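So, taking one of the headers above as an example, appending always makes nginx emit it on 4xx/5xx responses as well:
add_header 'Access-Control-Allow-Origin' '*' always;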
When I test the above add_header settings with:
# nginx -t && service nginx reload
I get
nginx: [emerg] directive "add_header" is not terminated by ";" in
/etc/nginx/enabled-sites/example.com.conf:21
nginx: configuration file /etc/nginx/nginx.conf test failed
So the complaint is regarding this line:
add_header PS 1
which is missing the semicolon (;).
To test the headers I like to use
# curl -I http://example.com
According to the ngx_http_headers_module manual
syntax: add_header name value;
default: —
context: http, server, location, if in location
I further tried
add_header X-test-A 1;
add_header "X-test-B" "2";
add_header 'X-test-C' '3';
in the context of http, server and location, but it only showed up in the server context.
Firstly, let me say that after looking around the web, I found this answer popping up everywhere:
location ~* \.(eot|ttf|woff|woff2)$ {
add_header Access-Control-Allow-Origin *;
}
However, I have decided to post this as a separate answer, as I only managed to get this particular solution working after putting in about ten more hours of searching.
It seems that Nginx doesn't define any [correct] font MIME types by default. By following this tutorial I found I could add the following:
application/x-font-ttf ttc ttf;
application/x-font-otf otf;
application/font-woff woff;
application/font-woff2 woff2;
application/vnd.ms-fontobject eot;
To my etc/nginx/mime.types file. As stated, the above solution then worked. Obviously, this answer is aimed at sharing fonts but it's worth noting that the MIME types may not be defined in Nginx.
Evidently the add_header inheritance quirk/gotcha applies to the upstream layer as well.
I had a script pre-authorizing requests meant for another service, and was therefore returning all of the headers from the other service.
Once I started adding an 'Access-Control-Allow-Origin' entry along with these relayed headers, the browser would actually get the entry and allow the request.
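Roughly what that ended up looking like on the proxy side (the upstream name and path here are placeholders, not my real setup):
location /api/ {
    proxy_pass http://backend_service;
    # headers returned by the upstream pass through untouched;
    # this entry is added on top of them for the browser
    add_header 'Access-Control-Allow-Origin' "$http_origin" always;
}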
I don't think it works properly with just a reload (nginx -s reload).
When I used add_header and then reloaded, nothing changed in the response.
But when I made a deliberate error, saw a 404 error on the client side, and then fixed my deliberate error and reloaded again, add_header worked.
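If a plain reload doesn't seem to pick the change up, testing the config and doing a full restart is worth a try (assuming a systemd-based install):
nginx -t && systemctl restart nginx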
What does your nginx error log say?
Do you know which add_header lines are breaking the configuration? If not, comment them all out, then enable them one by one, reloading nginx to see which one(s) cause the problem. I would begin by commenting out the block:
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Headers' 'Authorization,Content-Type,Accept,Origin,User-Agent,DNT,Cache-Control,X-Mx-ReqToken';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';
add_header PS 1
The problem could be that you're setting headers not supported by the core httpHeaders module. Installing the NginxHttpHeadersMoreModule may be helpful.
Also, try replacing the two add_header lines in the location ~* \... with the following:
add_header Pragma '';
add_header Cache-Control 'public, max-age=31536000';
Is there a reason you have the gzip configuration here and not in your global nginx.conf?
It turns out that trying to update nginx to the newest version was causing this. I had previously tried to reinstall, which seemed to work, but actually Ubuntu wasn't properly removing nginx. So all I had to do was reinstall Ubuntu Server and install everything anew using just the standard Ubuntu repositories.
