Map to custom domain still displays *.scapp.io - meteor

I just followed the procedure to map a single domain to my custom domain:
Create the domain mydomain.com in ORGS
Create the route myapp.mydomain.com in SPACES
Map my app to both myapp.scapp.io and myapp.mydomain.com in SPACES
Add a CNAME DNS entry for mydomain.com with the name myapp and target myapp.scapp.io (I'm using Amazon Route 53)
The mapping works, I can access myapp with myapp.mydomain.com, but the address still shows myapp.scapp.io
How can I make the mapping transparent and display myapp.mydomain.com in the address bar?

Update:
I managed to make it work on amazon route 53:
Create the domain mydomain.com in ORGS
Create the route myapp.mydomain.com in SPACES
Map my app to both myapp.scapp.io and myapp.mydomain.com in SPACES
Add a CNAME DNS entry for mydomain.com with the name myapp-cname and target myapp.scapp.io
Add a DNS entry for mydomain.com with the name myapp, with Alias enabled and target myapp-cname.mydomain.com
It shows myapp.mydomain.com in the address bar as expected, but I doubt this is the right way to do it.

UPDATE: The issue came from my Meteor application, which did not properly force requests to HTTPS. I was using the force-ssl package, but as the README says:
Meteor bundles (i.e. meteor build) do not include an HTTPS server or certificate. A proxy server that terminates SSL in front of a Meteor bundle must set the x-forwarded-proto or forwarded (RFC 7239) header for this package to work.
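In other words, whatever terminates TLS in front of the bundle has to forward the original scheme. Outside of Cloud Foundry, a plain TLS-terminating nginx would satisfy that requirement with something like the following sketch (the certificate paths and upstream address are hypothetical, not part of the setup below):

```nginx
server {
    listen 443 ssl;
    server_name myapp.mydomain.com;
    # Hypothetical certificate paths; any TLS-terminating proxy works.
    ssl_certificate     /etc/ssl/certs/myapp.pem;
    ssl_certificate_key /etc/ssl/private/myapp.key;

    location / {
        # force-ssl checks this header to see whether the original
        # request already arrived over HTTPS.
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        # Hypothetical upstream address for the Meteor bundle.
        proxy_pass http://127.0.0.1:3000;
    }
}
```

On Cloud Foundry the platform router terminates TLS instead, which is why the workaround below puts a small nginx app in front.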
Therefore I am using a staticfile application with a custom nginx.conf.
I created a staticfile application using the staticfile-buildpack, added my private domains to the routes in the manifest.yml, and set the environment variable FORCE_HTTPS to true:
applications:
- name: my-nginx
  memory: 128M
  instances: 1
  buildpack: https://github.com/cloudfoundry/staticfile-buildpack.git
  routes:
  - route: 'app1.mydomain.com'
  - route: 'app2.mydomain.com'
  - route: 'app1.subdomain.mydomain.com'
  - route: 'app2.subdomain.mydomain.com'
  - route: 'app3.mydomain.com'
  env:
    FORCE_HTTPS: true
The next step was to create the custom nginx.conf with a server{...} block for each of my private domains, with a proxy_pass on the corresponding scapp.io domain (here with two private domains):
worker_processes 1;
daemon off;
error_log <%= ENV["APP_ROOT"] %>/nginx/logs/error.log;
events { worker_connections 1024; }
http {
    charset utf-8;
    log_format cloudfoundry '$http_x_forwarded_for - $http_referer - [$time_local] "$request" $status $body_bytes_sent';
    access_log <%= ENV["APP_ROOT"] %>/nginx/logs/access.log cloudfoundry;
    default_type application/octet-stream;
    include mime.types;
    sendfile on;
    gzip on;
    gzip_disable "msie6";
    gzip_comp_level 6;
    gzip_min_length 1100;
    gzip_buffers 16 8k;
    gzip_proxied any;
    gunzip on;
    gzip_static always;
    gzip_types text/plain text/css text/js text/xml text/javascript application/javascript application/x-javascript application/json application/xml application/xml+rss;
    gzip_vary on;
    tcp_nopush on;
    keepalive_timeout 30;
    port_in_redirect off; # Ensure that redirects don't include the internal container PORT - <%= ENV["PORT"] %>
    server_tokens off;

    server {
        listen <%= ENV["PORT"] %>;
        server_name app1.mydomain.com;
        # Redirects to https if the environment variable "FORCE_HTTPS" is set to true
        <% if ENV["FORCE_HTTPS"] %>
        if ($http_x_forwarded_proto != "https") {
            return 301 https://$host$request_uri;
        }
        <% end %>
        location / {
            proxy_pass https://app1.scapp.io/;
        }
    }

    server {
        listen <%= ENV["PORT"] %>;
        server_name app2.mydomain.com;
        <% if ENV["FORCE_HTTPS"] %>
        if ($http_x_forwarded_proto != "https") {
            return 301 https://$host$request_uri;
        }
        <% end %>
        location / {
            proxy_pass http://app2.scapp.io/;
        }
    }
}
The next steps are the usual ones:
Create a domain mydomain.com in the right ORG and each of my private routes in the correct SPACE.
Create SSL certificates for each of my private domains in the swisscomdev console.
Create CNAME DNS entries for mydomain.com with the name * and target my-nginx.scapp.io (the scapp.io route automatically assigned by swisscom for my staticfile application).
Lastly, I pushed the application with cf push and it works like a charm!

Related

Need help in simulating (and blocking) HTTP_HOST spoofing attacks

I have an nginx reverse proxy serving multiple small web services. Each server has a different domain name and is individually protected with SSL using Certbot. The installation for these was pretty standard, as provided by Ubuntu 20.04.
I have a default server block to catch requests and return a 444 where the hostname does not match one of my server names. However, about 3-5 times per day, a request gets through to my first server (which happens to be Django), which then throws the "Not in ALLOWED_HOSTS" message. Since this is the first server block, I'm assuming something in the request doesn't match any of my blocks and it is sent upstream to serverA.
Since the failure is rare, and in order to simulate this HOST_NAME spoofing attack, I have tried to use curl as well as netcat with raw text files to mimic the situation, but I am not able to get past my nginx, i.e. I get a 444 back as expected.
Can you help me 1) simulate an attack with the right tools, and 2) identify how to fix it? I'm assuming, since this is reaching my server, that it is coming over https?
My sanitized sudo nginx -T, and an example of an attack are shown below.
ubuntu@ip-A.B.C.D:/etc/nginx/conf.d$ sudo nginx -T
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# configuration file /etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
    worker_connections 768;
}
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    # SSL Settings
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;
    # Logging Settings
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    # Gzip Settings
    gzip on;
    # Virtual Host Configs
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
# configuration file /etc/nginx/modules-enabled/50-mod-http-image-filter.conf:
load_module modules/ngx_http_image_filter_module.so;
# configuration file /etc/nginx/modules-enabled/50-mod-http-xslt-filter.conf:
load_module modules/ngx_http_xslt_filter_module.so;
# configuration file /etc/nginx/modules-enabled/50-mod-mail.conf:
load_module modules/ngx_mail_module.so;
# configuration file /etc/nginx/modules-enabled/50-mod-stream.conf:
load_module modules/ngx_stream_module.so;
# configuration file /etc/nginx/mime.types:
types {
    text/html html htm shtml;
    text/css css;
    # Many more here.. removed to shorten list
    video/x-msvideo avi;
}
# configuration file /etc/nginx/conf.d/serverA.conf:
upstream serverA {
    server 127.0.0.1:8000;
    keepalive 256;
}
server {
    server_name serverA.com www.serverA.com;
    client_max_body_size 10M;
    location / {
        proxy_pass http://serverA;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
    listen 443 ssl; # managed by Certbot
    ssl_certificate ...; # managed by Certbot
    ssl_certificate_key ...; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = serverA.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = www.serverA.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80;
    server_name serverA.com www.serverA.com;
    return 404; # managed by Certbot
}
# configuration file /etc/letsencrypt/options-ssl-nginx.conf:
# This file contains important security parameters. If you modify this file
# manually, Certbot will be unable to automatically provide future security
# updates. Instead, Certbot will print and log an error message with a path to
# the up-to-date file that you will need to refer to when manually updating
# this file.
ssl_session_cache shared:le_nginx_SSL:10m;
ssl_session_timeout 1440m;
ssl_session_tickets off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA";
# configuration file /etc/nginx/conf.d/serverB.conf:
upstream serverB {
    server 127.0.0.1:8002;
    keepalive 256;
}
server {
    server_name serverB.com fsn.serverB.com www.serverB.com;
    client_max_body_size 10M;
    location / {
        proxy_pass http://serverB;
        ... as above ...
    }
    listen 443 ssl; # managed by Certbot
    ... as above ...
}
server {
    if ($host = serverB.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = www.serverB.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = fsn.serverB.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    server_name serverB.com fsn.serverB.com www.serverB.com;
    listen 80;
    return 404; # managed by Certbot
}
# Another similar serverC, serverD etc.
# Default server configuration
#
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    # server_name "";
    return 444;
}
Request data from a request that successfully gets past nginx to reach serverA (Django), where it throws an error. (Note that the path will 404, and the HTTP_HOST headers are not my server names. More often, the HTTP_HOST comes in with my static IP address as well.)
Exception Type: DisallowedHost at /movie/bCZgaGBj
Exception Value: Invalid HTTP_HOST header: 'www.tvmao.com'. You may need to add 'www.tvmao.com' to ALLOWED_HOSTS.
Request information:
USER: [unable to retrieve the current user]
GET: No GET data
POST: No POST data
FILES: No FILES data
COOKIES: No cookie data
META:
HTTP_ACCEPT = '*/*'
HTTP_ACCEPT_LANGUAGE = 'zh-cn'
HTTP_CACHE_CONTROL = 'no-cache'
HTTP_CONNECTION = 'Upgrade'
HTTP_HOST = 'www.tvmao.com'
HTTP_REFERER = '/movie/bCZgaGBj'
HTTP_USER_AGENT = 'Mozilla/5.0 (iPhone; CPU iPhone OS 13_2_3 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.0.3 Mobile/15E148 Safari/604.1'
HTTP_X_FORWARDED_FOR = '27.124.12.23'
HTTP_X_REAL_IP = '27.124.12.23'
PATH_INFO = '/movie/bCZgaGBj'
QUERY_STRING = ''
REMOTE_ADDR = '127.0.0.1'
REMOTE_HOST = '127.0.0.1'
REMOTE_PORT = 44058
REQUEST_METHOD = 'GET'
SCRIPT_NAME = ''
SERVER_NAME = '127.0.0.1'
SERVER_PORT = '8000'
wsgi.multiprocess = True
wsgi.multithread = True
Here's how I've tried to simulate the attack using raw http requests and netcat:
me@linuxmachine:~$ cat raw.http
GET /dashboard/ HTTP/1.1
Host: serverA.com
Host: test.com
Connection: close
me@linuxmachine:~$ cat raw.http | nc A.B.C.D 80
HTTP/1.1 400 Bad Request
Server: nginx/1.18.0 (Ubuntu)
Date: Fri, 27 Jan 2023 15:05:13 GMT
Content-Type: text/html
Content-Length: 166
Connection: close
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx/1.18.0 (Ubuntu)</center>
</body>
</html>
If I send my correct serverA.com as the host header, I get a 301 (redirecting to https).
If I send an incorrect host header (e.g. test.com), I get an empty response (expected).
If I send two host headers (correct and incorrect), I get a 400 Bad Request.
If I send the correct host, but to port 443, I get a 400 "plain HTTP sent to HTTPS port"...
How do I simulate a request to get past nginx to my upstream serverA like the bots do? And how do I block it with nginx?
Thanks!
There is something magical about asking SO. The process of writing makes the answer appear :)
To my first question above, of simulating the spoof, I was able to just use curl in the following way:
me@linuxmachine:~$ curl -H "Host: A.B.C.D" https://example.com
I'm pretty sure I've tried this before, but I'm not sure why I didn't try this exact invocation (perhaps I was sending a different header, like Http-Host: or something).
With this call, I was able to trigger the error as before, which made it easy to test the nginx configuration and answer the second question.
It was clear that the spoof was coming on 443, which led me to this very informative post on StackExchange
This also explained why we can't just listen on 443 and respond with a 444 without first presenting an SSL certificate, due to the way SSL works.
The three options suggested (haproxy, a fake cert, and the if ($host ...) directive) might all work, but the simplest, I think, is the last one. Since this if () is not within the location context, I believe this to be OK.
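For completeness, a fourth option worth noting (not from the original discussion): nginx 1.19.4 and later can refuse the TLS handshake outright for hostnames that have no matching server block, which avoids both the fake certificate and the if () check:

```nginx
# Catch-all TLS server for unknown SNI names (requires nginx >= 1.19.4).
# Clients asking for a hostname with no matching server block never
# complete the handshake, so no request ever reaches the upstreams.
server {
    listen 443 ssl default_server;
    ssl_reject_handshake on;
}
```

This closes the connection during the handshake rather than after it, so the client sees a TLS error instead of an empty HTTP response.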
My new serverA block looks like this:
server {
    server_name serverA.com www.serverA.com;
    client_max_body_size 10M;
    ## This fixes it
    if ( $http_host !~* ^(serverA\.com|www\.serverA\.com)$ ) {
        return 444;
    }
    ## and it's not inside the location context...
    location / {
        proxy_pass http://upstream;
        proxy_http_version 1.1;
        ...

Keeping nginx location sub-directories & sub-files under the original location directory

I have a location directive
location /nextcloud {
proxy_pass http://nextcloud_container:80/;
}
When I visit http://192.168.40.231:408/nextcloud, the page loads fine because it is passed through to my Nextcloud docker address http://192.168.1.2:80 (just an example container address).
The problem is that it's missing all CSS files, because the requests for those files go to http://192.168.40.231:408/style.css instead of http://192.168.1.2:80/style.css (just an example container address). The same happens with redirects to other pages, e.g. /page2.html.
So what I would like to happen is:
When I visit /nextcloud, all files and page redirects that the Nextcloud container issues should be appended to /nextcloud (e.g. /nextcloud/style.css), so that style.css is passed through to the container instead of the host computer address.
There are other location directives that I would like to do the same thing
nginx.conf
user nginx;
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    gzip on;
    proxy_buffering off;
    proxy_cache off;
    proxy_http_version 1.1;

    server {
        listen 80;
        server_name nginx_container;
        location /nextcloud {
            proxy_pass http://nextcloud_container:80/;
        }
        location /syncthing {
            proxy_pass http://syncthing_container:8384/;
        }
    }
}
docker-compose.yml
version: "3"
services:
  nginx:
    image: nginx
    build: ./nginx
    container_name: nginx_container
    ports:
      - 408:80
    networks:
      - primary_network
  nextcloud:
    container_name: nextcloud_container
    image: nextcloud
    build: ./nextcloud
    networks:
      - primary_network
  syncthing:
    container_name: syncthing_container
    image: syncthing/syncthing
    build: ./syncthing
    ports:
      - 22000:22000/tcp # TCP file transfers
      - 22000:22000/udp # QUIC file transfers
      - 21027:21027/udp # Receive local discovery broadcasts
    restart: unless-stopped
    networks:
      - primary_network
networks:
  primary_network:
    ipam:
      driver: default
This is a similar question I found, in case this one is easier to understand: How can I use Nginx as a proxy for Nextcloud when the URL subdirectories are changing?
I figured out a solution to this: you just have to map each service to a new server block with a unique port. Here's some code to better show what I mean.
user nginx;
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    gzip on;
    proxy_buffering off;
    proxy_cache off;
    proxy_http_version 1.1;

    # Redirects http traffic to https blocks below
    server {
        listen 80;
        server_name xx.xx.xx.xx;
        return 302 https://$server_name$request_uri;
    }

    # SSL Configurations
    include /etc/nginx/nginx_ssl.conf;

    # NextCloud
    server {
        listen 443 ssl;
        server_name xx.xx.xx.xx;
        # Reverse proxy to nextcloud container
        location / {
            proxy_pass http://xx.xx.xx.xx:1112/;
        }
        # Redirects to unique syncthing port
        location /syncthing {
            return 301 https://xx.xx.xx.xx:444/;
        }
    }

    # Syncthing
    server {
        listen 444 ssl;
        server_name xx.xx.xx.xx;
        # Reverse proxy to syncthing container
        location / {
            proxy_pass https://xx.xx.xx.xx:1114/;
        }
        # Redirects to unique nextcloud port
        location /nextcloud {
            return 301 https://xx.xx.xx.xx:443/;
        }
    }
}
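An alternative worth sketching, in case the unique-port mapping is undesirable: if the backend can serve itself under the /nextcloud prefix (for Nextcloud that means configuring its base URL / overwritewebroot accordingly, which is an assumption here), nginx can forward the path unchanged by omitting the URI part of proxy_pass:

```nginx
location /nextcloud/ {
    # With no URI after host:port, nginx forwards the request path
    # unchanged, so /nextcloud/style.css reaches the container as
    # /nextcloud/style.css instead of having the prefix stripped.
    # The container must be configured to serve under /nextcloud.
    proxy_pass http://nextcloud_container:80;
    proxy_set_header Host $host;
}
```

The trailing slash (or any URI) on proxy_pass is what triggers prefix replacement; leaving it off passes the original URI through.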

NGINX - Directive to Allow iPhone's IP address for accessing wordpress login page (mysite.com/wp-login.php), and Deny all other IP addresses

MY ENVIRONMENT:
I am running a LEMP server which is working and running wordpress quite properly. As of now, I have my wordpress web login (www.mysite/wp-login.php) blocked for all IP addresses EXCEPT any IP on my LAN, with the following directive:
server {
    # Allow LAN only on wp-login page (www.mysite.com/wp-login.php)
    location ~ /wp-login.php {
        allow 192.168.1.0/24;
        deny all;
    }
}
This directive successfully blocks all internet traffic to "mywebsite.com/wp-login.php", which is the wordpress admin login page.
In other words, with this directive set, I can access the wordpress login page from anywhere on my internal LAN, but the directive denies any outside internet traffic from seeing the "mywebsite.com/wp-login.php" page. GREAT!
WHAT I WANT TO DO,
is to whitelist the IP address of my phone, so that I can access the wordpress login page from my phone's IP address, while still blocking any other outside internet traffic. To do so, I go to www.whatsmyip.org on my phone, copy the ip address that it gives me, then modify the previous directive to look like the following:
server {
    # Allow LAN and CellPhone access to wp-login page (www.mysite.com/wp-login.php)
    location ~ /wp-login.php {
        allow 77.232.28.46; # my phone's ip address as shown on whatsmyip.org
        allow 192.168.1.0/24;
        deny all;
    }
}
HOWEVER,
after reloading nginx, I still cannot access the wp-login (wordpress login) page from my phone.
MY QUESTION IS:
Using NGINX, how can I properly whitelist my phone's IP address, while blocking access for everything else to the wordpress login page located at www.mysite.com/wp-login.php?
FOR REFERENCE:
Below is my NGINX.CONF file:
# This is the /etc/nginx/nginx.conf file for Danrancan's LEMP server
#
user www-data;
worker_processes 4;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
load_module /usr/share/nginx/modules/ngx_http_modsecurity_module.so;
events {
    worker_connections 1024;
    # multi_accept on;
}
http {
    ##
    # Mod Security
    ##
    modsecurity on;
    #modsecurity off;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;
    ##
    # Basic Settings
    ##
    client_max_body_size 512M;
    fastcgi_read_timeout 300;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    types_hash_max_size 4096;
    server_tokens off;
    server_names_hash_bucket_size 64;
    # server_name_in_redirect off;
    # Create a custom Nginx log format called netdata that includes information about request_time and upstream_response_time, measured in seconds with millisecond resolution.
    log_format netdata '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $body_bytes_sent '
                       '$request_length $request_time $upstream_response_time '
                       '"$http_referer" "$http_user_agent"';
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    ##
    # SSL Settings
    ##
    ssl_session_cache shared:SSL:10m; # SSL session cache
    ssl_session_timeout 1h;
    ssl_prefer_server_ciphers on;
    ##
    # Logging Settings
    ##
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    ##
    # Gzip Settings
    ##
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 5;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types
        application/atom+xml
        application/javascript
        application/json
        application/rss+xml
        application/vnd.ms-fontobject
        application/x-font-ttf
        application/x-web-app-manifest+json
        application/xhtml+xml
        application/xml
        font/opentype
        image/svg+xml
        image/x-icon
        text/css
        text/plain
        text/x-component
        text/javascript
        text/xml;
    # Extra HTTP header response to determine whether a request is being served from the cache
    #add_header Fastcgi-Cache $upstream_cache_status;
    ##
    # Virtual Host Configs
    ##
    upstream local_php {
        server unix:/run/php/php7.4-fpm.sock;
    }
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
    include /etc/nginx/perfect-forward-secrecy.conf;
    ##
    # Harden nginx against DDOS. # noted from www.pestmeester.nl
    ##
    client_header_timeout 10;
    # For good security, set client_body_timeout to 10. For uploading large files, set it higher.
    client_body_timeout 10;
    keepalive_timeout 10;
    send_timeout 10;
}
and my VIRTUAL HOST CONFIG:
# Danrancan's Virtual host config for /etc/nginx/sites-available/mysite.com.conf
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name mysite.com www.mysite.com;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    ###
    # SSL (From Mozilla Config Generator: Modern Configuration)
    ###
    # Add Strict Transport Security response header with the "always" parameter, to help prevent MITM attacks.
    # HSTS (ngx_http_headers_module is required) (63072000 seconds)
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;
    ## Prevent clickjacking by adding an X-Frame-Options header
    # Add X-Frame-Options header to nginx with the following line:
    add_header x-frame-options "SAMEORIGIN" always;
    # Add a content security policy header
    add_header Content-Security-Policy "frame-ancestors 'self';";
    # Secure MIME types with X-Content-Type-Options. The below line adds the X-Content-Type-Options header in Nginx.
    add_header X-Content-Type-Options nosniff;
    # Enable X-XSS-Protection header in Nginx
    add_header X-XSS-Protection "1; mode=block";
    add_header Referrer-Policy "strict-origin";
    add_header Permissions-Policy "geolocation=(),midi=(),sync-xhr=(),microphone=(),camera=(),magnetometer=(),gyroscope=(),fullscreen=(self),payment=()";
    # Path to signed certificate + intermediate certificates
    ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem; # Managed by admin
    ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem; # Managed by admin
    # Perfect Forward Secrecy Diffie-Hellman 4096 parameters
    ssl_dhparam /etc/ssl/private/dhparams4096.pem; # Managed by admin
    # Include "perfect-forward-secrecy.conf" file in this virtual host. NOTE: No need to do this, as it's already included in the nginx.conf file, so you should comment this out.
    #include /etc/nginx/perfect-forward-secrecy.conf; # Managed by admin
    # Modern SSL configuration with OCSP stapling turned on
    #ssl_protocols TLSv1.3; # commented out because it's already in the nginx.conf file
    ssl_prefer_server_ciphers on;
    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
    ssl_session_tickets off;
    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4;
    # Verify chain of trust of OCSP response using Root CA and intermediate certs
    ssl_trusted_certificate /etc/letsencrypt/live/mysite.com/chain.pem; # Managed by admin
    server_name mysite.com www.mysite.com;
    root /var/www/mysite.com;
    # Error & Access Logs
    #error_log /var/www/mysite.com.logs/error.log error;
    #access_log /var/www/mysite.com.logs/access.log;
    access_log /var/log/nginx/mysite.com.access.log netdata;
    error_log /var/log/nginx/mysite.com.error.log warn;
    # This should be in your http block and if it is, it's not needed here.
    index index.php index.html index.htm;
    # Only allow access to /admin via internal IP
    location ^~ /admin {
        allow 192.168.1.0/24;
        deny all;
        error_page 403 =444;
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
    }
    # Allow local only to wp-login page
    location ~ /wp-login.php {
        allow 192.168.1.0/24;
        deny all;
        error_page 403 =444;
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
    }
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ /.well-known {
        allow all;
    }
    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }
    # Cache static files for as long as possible
    location ~* \.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
        access_log off;
        log_not_found off;
        expires max;
    }
    # Security settings for better privacy: deny hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }
    # Disallow PHP in upload folder
    location /wp-content/uploads/ {
        location ~ \.php$ {
            deny all;
        }
    }
    # Pass PHP scripts to FastCGI server
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        # With php-fpm (or other unix sockets):
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # Below was added as recommended by pestmeester.nl
        fastcgi_intercept_errors on;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
    }
    location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
        expires max;
        log_not_found off;
    }
}
What you are doing should work.
Since it is not, it seems that for some reason nginx doesn't see your phone IP as a match to what you have configured.
It sounds like your WP box is running on a private network (because of the 192.168.1.0/ IP addresses you mentioned).
When you connect to the WP box from the internet, is it going through a router with port forwarding/NAT?
First thing I would do is just tail your nginx access log (access_log /var/log/nginx/access.log;) when trying to access with your iphone and see what is reported.
If the request is coming through a proxy/reverse-proxy you may need to make sure the proxy is adding X-Forwarded-For to pass along the remote (iphone) ip address. The request to nginx/wp would be coming from the proxy IP and there would be a header X-Forwarded-For added to the request containing the original remote address.
When nginx is used this way you need to use nginx's realip module...something like:
real_ip_header X-Forwarded-For;
set_real_ip_from 192.168.1.1; # proxy ip
http://nginx.org/en/docs/http/ngx_http_realip_module.html
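Putting the two pieces together, a sketch of how the realip directives and the existing allow/deny rules would combine (192.168.1.1 is a placeholder for whatever address actually shows up as the connecting IP in the access log):

```nginx
# In the http{} (or server{}) context, before the access rules run.
# nginx only trusts X-Forwarded-For from the address listed here.
set_real_ip_from 192.168.1.1;   # placeholder: the proxy/router address
real_ip_header X-Forwarded-For;

# With $remote_addr rewritten to the client's real public IP,
# the phone's address now matches the allow rule:
location ~ /wp-login.php {
    allow 77.232.28.46;         # phone's public IP
    allow 192.168.1.0/24;       # LAN
    deny all;
}
```

Note that allow/deny compare against $remote_addr, which is exactly what the realip module rewrites; without it, every forwarded request appears to come from the proxy's address.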

Get the certificate and key file names stored on Heroku to set up SSL on Nginx server

I want to add the certificate and key to the Nginx server that my application is served on, hosted by Heroku. This is what I currently have in my Nginx config file. Does proxying the SSL server work for this instead, and does it keep the server secure? If not, how am I supposed to get the file names for the .pem and .key files that I uploaded to Heroku for my specific application?
nginx.conf.erb
daemon off;
# Heroku dynos have at least 4 cores.
worker_processes <%= ENV['NGINX_WORKERS'] || 4 %>;
events {
    use epoll;
    accept_mutex on;
    worker_connections <%= ENV['NGINX_WORKER_CONNECTIONS'] || 1024 %>;
}
http {
    server_tokens off;
    log_format l2met 'measure#nginx.service=$request_time request_id=$http_x_request_id';
    access_log <%= ENV['NGINX_ACCESS_LOG_PATH'] || 'logs/nginx/access.log' %> l2met;
    error_log <%= ENV['NGINX_ERROR_LOG_PATH'] || 'logs/nginx/error.log' %>;
    include mime.types;
    default_type text/html;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    # Must read the body in 65 seconds.
    keepalive_timeout 65;
    # Handle SNI
    proxy_ssl_server_name on;
    upstream app_server {
        server unix:/tmp/nginx.socket fail_timeout=0;
    }
    server {
        listen <%= ENV["PORT"] %>;
        server_name _;
        # Define the specified charset to the "Content-Type" response header field
        charset utf-8;
        location / {
            proxy_ssl_name <%= ENV["HEROKU_DOMAIN"] %>;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://app_server;
            client_max_body_size 5M;
        }
        location /static {
            alias /app/flask_app/static;
        }
    }
}
If you create the SSL certificate from CloudFlare, you can't access it through the Heroku CLI, but you can download it through CloudFlare.
Please check that you have routed your domain through CloudFlare as described in Configure-Cloudflare-and-Heroku-over-HTTPS.
Download the SSL cert via CloudFlare.
Set up the SSL cert for Nginx: Setup SSL Cert.
Hope it helps.
EDIT
Put the SSL cert .key and .pem into the same folder as nginx.conf.erb, e.g. domain_name.key & domain_name.pem.
Deploy to Heroku.
Use config like this:
ssl_certificate domain_name.pem;
ssl_certificate_key domain_name.key;
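For context, those two directives belong inside a server block that actually terminates TLS. Adapted to the config above, that would look something like the following sketch (an assumption on my part: on Heroku the platform router normally terminates TLS before traffic reaches the dyno, so this only applies if the dyno itself must speak HTTPS):

```nginx
server {
    # "ssl" on the listen directive makes nginx load the cert/key pair;
    # the file names match whatever sits next to nginx.conf.erb.
    listen <%= ENV["PORT"] %> ssl;
    ssl_certificate     domain_name.pem;
    ssl_certificate_key domain_name.key;

    location / {
        proxy_pass http://app_server;
    }
}
```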

Wordpress permalinks with 404 on nginx with gunicorn

I have Wordpress running on nginx, which also runs gunicorn (for django). Wordpress should be accessible in the subfolder www.mySite.de/blog/. The main page at this URL can be accessed, but when I open a link to a page (e.g. www.mySite.de/blog/testpage) I get 404 errors.
My nginx configuration is as follows:
nginx.conf
#user nobody;
user nginx nginx;
worker_processes 4;
pid /run/nginx.pid;
error_log /var/log/nginx/error.log debug;
#error_log logs/error.log notice;
#error_log logs/error.log info;
events {
    worker_connections 1024;
    accept_mutex on; # "on" if nginx worker_processes > 1
    # use epoll; # enable for Linux 2.6+
    # use kqueue; # enable for FreeBSD, OSX
}
http {
    include mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay off;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # Gzip Settings
    gzip on;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_min_length 500;
    gzip_disable "MSIE [1-6]\.";
    gzip_types text/plain text/xml text/css
               text/comma-separated-values
               text/javascript application/x-javascript
               application/atom+xml;
    # Virtual Host Configs
    include /etc/nginx/sites-enabled/*;
}
production.conf (imported from 'sites-enabled' folder)
upstream production_nginx {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the Unicorn master nukes a
    # single worker for timing out).
    # for UNIX domain socket setups:
    server unix:/home/mySite/production/run/gunicorn.sock fail_timeout=0;
}

upstream production_php {
    server unix:/var/run/php5-fpm.sock;
}

server {
    listen 80;
    server_name mySite.de
                www.mySite.de;
    return 301 https://www.mySite.de$request_uri;
}

server {
    listen 443;
    # note: no "ssl" flag or certificate is configured for this name, so the
    # HTTPS redirect for the bare domain may fail the TLS handshake
    server_name mySite.de;
    return 301 https://www.mySite.de$request_uri;
}

server {
    listen 443 ssl default_server;
    client_max_body_size 4G;
    server_name www.mySite.de;

    ssl_certificate /etc/ssl/certs/www.mySite.de.crt;
    ssl_certificate_key /etc/ssl/private/www.mySite.de.key;
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 5m;

    # ~2 seconds is often enough for most folks to parse HTML/CSS and
    # retrieve needed images/icons/frames, connections are cheap in
    # nginx so increasing this is generally safe...
    keepalive_timeout 5;

    access_log /var/log/nginx/production-access.log;
    error_log /var/log/nginx/production-error.log;

    location /static/ {
        alias /home/mySite/production/htdocs/static/;
    }

    location /media/ {
        alias /home/mySite/production/htdocs/media/;
    }

    location /blog/ {
        alias /home/mySite/production/htdocs/blog/;
        index index.php index.html index.htm;
        # This is cool because no php is touched for static content.
        # include the "?$args" part so non-default permalinks don't break when using query strings
        try_files $uri $uri/ /blog/index.php?q=$uri;

        location ~ \.php$ {
            # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
            fastcgi_split_path_info ^(/blog)(/.*)$;
            fastcgi_intercept_errors on;
            fastcgi_pass production_php;
            fastcgi_index index.php;
            include fastcgi.conf;
        }
    }

    location /favicon.ico {
        alias /home/mySite/production/htdocs/static/favicon.ico;
        log_not_found off;
        access_log off;
    }

    # path for static files
    root /home/mySite/production/htdocs/;

    location / {
        # an HTTP header important enough to have its own Wikipedia entry:
        # http://en.wikipedia.org/wiki/X-Forwarded-For
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # enable this if and only if you use HTTPS, this helps Rack
        # set the proper protocol for doing redirects:
        proxy_set_header X-Forwarded-Proto https;
        # pass the Host: header from the client right along so redirects
        # can be set properly within the Rack application
        proxy_set_header Host $http_host;
        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;
        # set "proxy_buffering off" *only* for Rainbows! when doing
        # Comet/long-poll stuff. It's also safe to set if you're
        # only serving fast clients with Unicorn + nginx.
        # Otherwise you _want_ nginx to buffer responses to slow
        # clients, really.
        # proxy_buffering off;
        # Try to serve static files from nginx, no point in making an
        # *application* server like Unicorn/Rainbows! serve static files.
        if (!-f $request_filename) {
            proxy_pass http://production_nginx;
            break;
        }
    }
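    # A common alternative (editorial sketch, not part of the original
    # config): the "if (!-f $request_filename)" pattern above is usually
    # written with try_files and a named location instead, which avoids
    # the well-known pitfalls of "if" inside "location":
    #
    #     location / {
    #         try_files $uri @production;
    #     }
    #     location @production {
    #         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    #         proxy_set_header X-Forwarded-Proto https;
    #         proxy_set_header Host $http_host;
    #         proxy_redirect off;
    #         proxy_pass http://production_nginx;
    #     }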
    # Error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
        root /home/mySite/production/htdocs;
    }

    pagespeed on;
    pagespeed EnableFilters convert_meta_tags;
    pagespeed EnableFilters collapse_whitespace;        # Remove whitespace
    pagespeed EnableFilters combine_javascript;         # Merge JS files
    pagespeed EnableFilters rewrite_javascript;         # Minimize JS
    pagespeed EnableFilters defer_javascript;           # Load important JS first
    pagespeed EnableFilters combine_css;                # Merge CSS files
    pagespeed EnableFilters rewrite_css;                # Minimize CSS
    pagespeed EnableFilters move_css_to_head;           # Move CSS to head
    pagespeed EnableFilters move_css_above_scripts;     # Move CSS above JS
    pagespeed EnableFilters prioritize_critical_css;    # Load important CSS first
    pagespeed EnableFilters fallback_rewrite_css_urls;  # Fallback if CSS could not be parsed
    pagespeed EnableFilters remove_comments;            # Remove comments
    pagespeed FileCachePath /var/ngx_pagespeed_cache;   # Use tmpfs for best results.

    # Ensure requests for pagespeed optimized resources go to the pagespeed
    # handler and no extraneous headers get set.
    location ~ "\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+" { add_header "" ""; }
    location ~ "^/ngx_pagespeed_static/" { }
    location ~ "^/ngx_pagespeed_beacon$" { }
    location /ngx_pagespeed_statistics { allow 127.0.0.1; deny all; }
    location /ngx_pagespeed_global_statistics { allow 127.0.0.1; deny all; }
    location /ngx_pagespeed_message { allow 127.0.0.1; deny all; }
    location /pagespeed_console { allow 127.0.0.1; deny all; }

    location /mod_pagespeed_example {
        location ~* \.(jpg|jpeg|gif|png|js|css)$ {
            add_header Cache-Control "public, max-age=600";
        }
    }
}
nginx error log
2014/06/18 00:56:53 [error] 22133#0: *102248 open() "/home/mySite/production/htdocsindex.php" failed (2: No such file or directory), client: 92.227.135.241, server: www.mySite.de, request: "GET /blog/page1 HTTP/1.1", host: "www.mySite.de"
nginx access log
xx.xxx.135.241 - - [18/Jun/2014:01:35:02 +0200] "GET /blog/page1 HTTP/1.1" 404 200 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36"
Questions:
I don't understand why my configuration makes nginx look for index.php at
/home/mySite/production/htdocsindex.php
instead of
/home/mySite/production/htdocs/blog/index.php
Why is the slash between htdocs and index.php missing, and why is the /blog part missing completely?
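For reference, a likely culprit is how nginx builds filesystem paths with root versus alias, combined with a long-standing issue when try_files is used inside an alias location (tracked upstream as nginx ticket #97). A sketch of the two path-mapping schemes, using the paths from the config above:

```nginx
# With root, nginx appends the full request URI to the root path:
#   root /home/mySite/production/htdocs/;   URI /blog/index.php
#   -> /home/mySite/production/htdocs/blog/index.php
#
# With alias, nginx replaces the location prefix with the alias path:
#   location /blog/ { alias /home/mySite/production/htdocs/blog/; }
#   URI /blog/index.php
#   -> /home/mySite/production/htdocs/blog/index.php
#
# try_files inside an alias location, however, is known to mix these
# two schemes when checking files and performing the internal redirect,
# which can produce broken concatenations like "htdocsindex.php".
```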
When I changed the trailing location block of production.conf from
location /blog/ {
    alias /home/mySite/production/htdocs/blog/;
to this
location /blog {
    alias /home/mySite/production/htdocs/blog/;
(removed the trailing slash), I no longer got the nginx error page, but then gunicorn and Django kicked in and gave me a Django 404 page. Why is Django kicking in here?
Also, using
try_files $uri $uri/ /blog/index.php?q=$uri&$args;
instead of
try_files $uri $uri/ /blog/index.php?q=$uri;
did not solve the issue. What is wrong with my configuration, and how can I get WordPress to work with nginx, gunicorn and Django?
Thanks a lot, Chris
These are the correct rewrites if your WordPress installation is in a subfolder (for example, a blog subfolder):
location /blog/ {
    index index.php index.html index.htm;
    try_files $uri $uri/ /blog/index.php?$args;
}
You already have a root path in your config:
root /home/mySite/production/htdocs/;
so you don't need an alias in the location (as long as your blog directory is located inside that root folder).
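Putting it together, a minimal sketch of the corrected /blog location (assuming the server-level root from the question is kept and WordPress lives in /home/mySite/production/htdocs/blog/):

```nginx
# Relies on "root /home/mySite/production/htdocs/;" at server level;
# no alias, so try_files resolves paths consistently.
location /blog/ {
    index index.php index.html index.htm;
    # "?$args" keeps query strings intact for non-default permalinks
    try_files $uri $uri/ /blog/index.php?$args;

    location ~ \.php$ {
        # the try_files fallback lands here and resolves against root:
        # /home/mySite/production/htdocs + /blog/index.php
        fastcgi_intercept_errors on;
        fastcgi_pass production_php;
        fastcgi_index index.php;
        include fastcgi.conf;
    }
}
```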
