Enable HTTP/2 Server Push in Nginx

I installed nginx with HTTP/2 support. My nginx is behind an application load balancer, and SSL termination happens at the LB only.
I first enabled http2_push like below:
http2_push_preload on;

location /mx {
    http2_push https://example.com/css/style.css;
    http2_push https://example.com/js/main.js;
}
But it did not work. My browser's debugger network tab showed the initiator as "index", and nghttp also did not show anything.
Another approach I tried is:
http2_push_preload on;

location /mx {
    add_header Link "<https://example.com/css/style.css>; rel=preload; as=style, <https://example.com/js/main.js>; rel=preload; as=script";
}
This second approach changed the initiator from "index" to "other" in the network tab, but the nghttp tool still confirms that no server push is happening.

My understanding is that the AWS Application Load Balancer is a layer 7 (HTTP) load balancer and only supports HTTP/2 on the front end, not to your Nginx back end.
You can check this by adding $server_protocol to your Nginx log_format:
log_format my_log_format '$remote_addr - $remote_user [$time_local] $server_protocol "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';
access_log /usr/local/nginx/nginx-access.log my_log_format;
I’d guess the traffic is coming in as HTTP/1.1.
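With that log format in place, each entry records the negotiated protocol; a backend request arriving over HTTP/1.1 would produce a line roughly like this (values illustrative):

203.0.113.10 - - [10/Oct/2020:12:00:00 +0000] HTTP/1.1 "GET /mx HTTP/1.1" 200 612 "-" "Mozilla/5.0"

If you see HTTP/1.1 rather than HTTP/2.0 here, the ALB is downgrading the backend connection and http2_push can never fire.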
Even if AWS ALB did support HTTP/2 to the backend, that wouldn't mean it supports HTTP/2 push. This gets complicated when multiple pieces of infrastructure are involved like this (what if the ALB and Nginx both support push but the client doesn't?), and the best advice is to push from the edge node, especially if it supports being instructed to do so via HTTP Link preload headers.
You are probably better off using a layer 4 (TCP) load balancer to get this to work.
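For comparison, push does work when the client's HTTP/2 connection terminates at Nginx itself. A minimal sketch of such a setup, assuming hypothetical certificate paths:

server {
    listen 443 ssl http2;
    server_name example.com;

    # hypothetical certificate locations
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location /mx {
        # push only fires when the client speaks HTTP/2 directly to Nginx
        http2_push /css/style.css;
        http2_push /js/main.js;
    }
}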


How to use nginx to direct traffic to two different ports on the same device?

I am currently working on an FPV robotics project that has two servers, flask/werkzeug and streamserver, serving HTTP traffic and streaming video to an external web server located on a different machine.
The way it is currently configured is like this:
http://1.2.3.4:5000 is the "web" traffic (command and control) served by flask/werkzeug
http://1.2.3.4:5001 is the streaming video channel served by streamserver.
I want to place them behind an HTTPS reverse proxy so that I can connect via https://example.com, where "example.com" is set to 1.2.3.4 in my external system's hosts file.
I would like to:
Pass traffic to the internal connection at 1.2.3.4:5000 through as a secure connection. (certain services, like the gamepad, won't work unless it's a secure connection.)
Pass traffic to 1.2.3.4:5001 as a plain-text connection on the inside as "streamserver" does not support HTTPS connections.
. . . such that the "external" connections (to ports 5000 and 5001) are both secure connections as far as the outside world is concerned:
[external system] --https://example.com:5000/5001--> nginx --> https://example.com:5000
                                                          \--> http://example.com:5001
http://example.com:5000 or 5001 redirects to https.
All of the literature I have seen so far talks about:
Routing/load-balancing to different physical servers.
Doing everything within a Kubernetes and/or Docker container.
My application is just an everyday, plain-vanilla server configuration; the only reason I am even messing with HTTPS is the really annoying problem of things not working except in a secure context, which prevents me from completing my project.
I am sure this is possible, but the literature is either hideously confusing or appears to talk to a different use case.
A reference to a simple how-to would be the most useful answer.
Clear and unambiguous steps would also be appreciated.
Thanks in advance for any help you can provide.
This minimal config should provide public endpoints:
http://example.com/* => https://example.com/*
https://example.com/stream => http://1.2.3.4:5001/
https://example.com/* => https://1.2.3.4:5000/
# redirect to HTTPS
server {
    listen 80;
    listen [::]:80;

    server_name example.com
                www.example.com;

    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name example.com
                www.example.com;

    ssl_certificate     /etc/nginx/ssl/server.cer;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    location /stream {
        proxy_pass http://1.2.3.4:5001/;   # HTTP
    }

    # fallback location
    location / {
        proxy_pass https://1.2.3.4:5000/;  # HTTPS
    }
}
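One hedged aside: in the fallback location, Nginx acts as a TLS client toward the backend. If the Flask/Werkzeug server uses a self-signed certificate, you may want to be explicit about the upstream TLS behaviour, for example:

location / {
    proxy_pass https://1.2.3.4:5000/;
    proxy_ssl_server_name on;  # send SNI so a name-based backend cert can match
    proxy_ssl_verify off;      # nginx's default; explicit here because the backend cert is self-signed
}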
First, credit where credit is due: @AnthumChris's answer is essentially correct.  However, if you've never done this before, the following additional information may be useful:
There is actually too much information online, most of which is contradictory, possibly wrong, and unnecessarily complicated.
It is not necessary to edit the nginx.conf file.  In fact, that's probably a bad idea.
The current open-source version of nginx can be used as a reverse proxy, despite comments on the nginx website suggesting you need the commercial (Plus) version.  As of this writing, the current version for the Raspberry Pi is 1.14.
After sorting through the reams of information, I discovered that setting up a reverse proxy to multiple backend devices/server instances is remarkably simple.  Much simpler than the on-line documentation would lead you to believe.
 
Installing nginx:
When you install nginx for the first time, it will report that the installation has failed.  This is a bogus warning.  You get it because the installation process tries to start the nginx service(s) before a valid configuration exists, so the service startup fails; the installation itself, however, is (likely) correct and proper.
 
Configuring the systems using nginx and connecting to it:
 
Note: This is a special case unique to my use-case as this is running on a stand-alone robot for development purposes and my domain is not a "live" domain on a web-facing server.  It is a "real" domain with a "real" and trusted certificate to avoid browser warnings while development progresses.
It was necessary for me to make entries in the robot's and remote system's HOSTS file to automagically redirect references to my domain to the correct device, (the robot's fixed IP address), instead of directnic's servers where the domain is parked.
 
Configuring nginx:
The correct place to put your configuration file (on the Raspberry Pi) is /etc/nginx/sites-available, with a symlink to that file in /etc/nginx/sites-enabled.
It does not matter what you name it, as nginx.conf blindly imports whatever is in that directory.  The flip side is that if there is anything already in that directory, you should remove it or rename it with a leading dot.
nginx -T is your friend!  You can use this to "test" your configuration for problems before you try to start it.
sudo systemctl restart nginx will attempt to restart nginx, (which as you begin configuration, will likely fail.)
sudo systemctl status nginx.service > ./[path]/log.txt 2>&1 is also your friend.  This lets you capture the runtime error messages that are preventing the service from starting.  In my case, the majority of the problems were caused by other services using ports I had selected, or silly misconfigurations.
Once you have nginx started, and the status returns no problems, try sudo netstat -tulpn | grep nginx to make sure it's listening on the correct ports.
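Putting those tips together, a typical configure-test-restart cycle looks something like this (the site filename myrobot is arbitrary):

sudo ln -s /etc/nginx/sites-available/myrobot /etc/nginx/sites-enabled/myrobot
sudo nginx -T                          # dump and validate the full configuration
sudo systemctl restart nginx           # apply it
sudo systemctl status nginx.service    # check for startup errors
sudo netstat -tulpn | grep nginx       # confirm the listening ports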
 
Troubleshooting nginx after you have it running:
Most browsers (Firefox and Chrome at least) support a "developer mode" that you enter by pressing F12.  The console messages can be very helpful.
 
SSL certificates:
Unlike other SSL servers, nginx requires the site certificate to be combined with the intermediate certificate bundle received from the certificate authority by using cat mycert.crt bundle.file > combined.crt to create it.
 
Ultimately I ended up with the following configuration file:
Note that I commented out the HTTP redirect as there was a service using port 80 on my device.  Under normal conditions, you will want to automatically re-direct port 80 to the secure connection.
Also note that I did not use hard-coded IP addresses in the config file.  This allows you to reconfigure the target IP address if necessary.
A corollary to that: if you're proxying to an internal secure device configured with the same certificates, you have to address it by domain name instead of IP address, otherwise the secure connection will fail because the certificate won't match.
 
# server {
#     listen example.com:80;
#     server_name example.com;
#     return 301 https://example.com$request_uri;
# }
# This is the "web" server (command and control), running Flask/Werkzeug
# that must be passed through as a secure connection so that the
# joystick/gamepad works.
#
# Note that the internal Flask server must be configured to use a
# secure connection too. (Actually, that may not be true, but that's
# how I set it up. . .)
#
server {
    listen example.com:443 ssl;
    server_name example.com;

    ssl_certificate     /usr/local/share/ca-certificates/extra/combined.crt;
    ssl_certificate_key /usr/local/share/ca-certificates/extra/example.com.key;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass https://example.com:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
# This is the video streaming port/server running streamserver
# which is not, and cannot be, secured. However, since most
# modern browsers will not mix insecure and secure content on
# the same page, the outward facing connection must be secure.
#
server {
    listen example.com:5001 ssl;
    server_name example.com;

    ssl_certificate     /usr/local/share/ca-certificates/extra/combined.crt;
    ssl_certificate_key /usr/local/share/ca-certificates/extra/www.example.com.key;
    ssl_prefer_server_ciphers on;

    # After securing the outward-facing connection, pass it through
    # as an insecure connection so streamserver doesn't barf.
    location / {
        proxy_pass http://example.com:5002;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
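A quick way to smoke-test both endpoints from the external system (assuming the HOSTS entry described above is in place; -k is only needed if the certificate chain isn't trusted locally):

curl -vk https://example.com/         # command and control via Flask/Werkzeug
curl -vk https://example.com:5001/    # video stream, TLS-wrapped by nginx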
Hopefully this will help the next person who encounters this problem.

reverse proxy with nginx ssl passthrough

I have several IIS web servers, each hosting multiple web applications.
They each have a public certificate.
Every IIS server has a unique IP.
All IIS servers are placed in the same DMZ.
I have set up an nginx system in another DMZ.
My goal is to have nginx handle all the requests from the Internet to the IIS servers and JUST pass through all the SSL and certificate checking to the IIS, as it was before nginx. I don't want nginx to break open the certificates, or offload them, etc.
Before I try to rumble with the nginx reverse proxy to get it done (since I'm not very familiar with nginx), my question would be: is this possible?
Believe me, I've googled time and time again and could not find something which answers my question(s).
Or maybe I'm too dumb to google correctly. I've even searched for passthrough, reverse proxy, and offloading.
So far I've gathered that nginx probably needs some extra modules. Since I have an "apt-get" installation, I don't even know how to add them.
Never mind, I found the solution:
Issue:
Several webservers with various applications on each are running behind a FW and responding only on port 443
The webservers have a wildcard certificate; they are IIS webservers (whoooho, very brave) and have public IP addresses on each
It is requested that the webservers not be exposed to the Internet and be moved to a DMZ
Since IPv4 addresses are in short supply these days, it is not possible to get more IP addresses
Nginx should only pass through the requests. No certificate break, decrypt, re-encrypt between webserver and reverse proxy or whatsoever.
Solution:
All webservers should be moved to an internal DMZ
A single nginx reverse proxy should handle all requests based on the webservers' DNS entries and map them. This makes the public IPv4 addresses obsolete
All webservers get a private IP
A wildcard certificate is just fine to handle all the aliases for DNS forwarding.
Steps to be done:
1. A single nginx RP should be placed in the external DMZ.
2. Configure nginx:
- Install nginx on a fully patched Debian with apt-get install nginx. At this point you'll get version 1.14 of nginx. Of course you may compile it too.
3. If you have installed nginx the apt-get way, it will be configured with the following modules, which you will need later: ngx_stream_ssl_preread, ngx_stream_map, and stream. Don't worry, they are already in the package. You may check with nginx -V.
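For example, a quick way to confirm stream support is compiled in (nginx -V prints its configure arguments to stderr, hence the redirect):

nginx -V 2>&1 | grep stream

If the configure line that comes back mentions the stream modules, you are good to go.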
4. External DNS configuration:
- All DNS requests from the Internet should point at nginx, e.g.:
webserver1.domain.com --> nginx
webserver2.domain.com --> nginx
webserver3.domain.com --> nginx
5. Configure the nginx reverse proxy:
cd to /etc/nginx/modules-enabled
Create a file with a name of your choice (e.g. vi passtru)
Content of this file:
stream {
    map $ssl_preread_server_name $name {
        webserver01.domain.com webserver01_backend;
        webserver02.domain.com webserver02_backend;
    }

    # The upstream names must match the values the map emits.
    upstream webserver01_backend {
        server 192.168.0.1:443;  # or DNS name
    }

    upstream webserver02_backend {
        server 192.168.0.2:443;  # or DNS name
    }

    log_format basic '$remote_addr [$time_local] '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time "$upstream_addr" '
                     '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';

    access_log /var/log/nginx/access.log basic;
    error_log /var/log/nginx/error.log;

    server {
        listen 443;
        proxy_pass $name;  # pass all requests to the upstream selected by the map above
        ssl_preread on;
    }
}
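Because routing happens on the TLS SNI field (via $ssl_preread_server_name), you can check which backend a given name reaches without a browser. A sketch with openssl (host and name placeholders are hypothetical):

openssl s_client -connect nginx.example.net:443 -servername webserver01.domain.com

The certificate that comes back should be the IIS server's own wildcard certificate, since nginx never terminates TLS in this setup.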
6. Unlink the default virtual webserver
rm /etc/nginx/sites-enabled/default
7. Redirect all http traffic to https:
Create a file: vi /etc/nginx/conf.d/redirect.conf
Add the following code:
server {
    listen 80;
    return 301 https://$host$request_uri;
}
Test: nginx -t
Reload: systemctl reload nginx
Open a browser and check /var/log/nginx/access.log while calling the webservers.
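As a final check, the port-80 redirect can be exercised with curl; a request like the one below should return roughly the following (hostname hypothetical):

curl -I http://webserver01.domain.com/

HTTP/1.1 301 Moved Permanently
Location: https://webserver01.domain.com/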
Finish

How to configure a proxy with a subdomain servername

I have the following vhost configuration in nginx:
upstream mybackendsrv {
    server backend:5432;
}

server {
    listen 80;
    server_name sub.domain.org;

    location / {
        proxy_pass http://mybackendsrv;
    }
}
When I use a server_name like sub.domain.org, I get the default nginx fallback and my server is not matched.
When I use a server_name like customroute, I get the correct behaviour and my server is matched.
I googled this issue a bit and I believe that subdomain matching is supported in nginx so I'm not sure what's wrong. I checked the access.log and error.log and I get no relevant log.
Any idea how to diagnose this?
I should be able to display route matching logic in debug mode in nginx, but I'm not sure how to accomplish this.
Any help is appreciated.
After investigation, it seems the problem was unrelated to the fact that our URL was a subdomain.
To debug the situation, a $host variable was introduced in the log_format directive in /etc/nginx/nginx.conf:
log_format main '$remote_addr - $host - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';
This $host variable allowed us to see that there was a problem with sub.domain.org: when we accessed sub.domain.org, the host was changed to the NGINX server's hostname, unlike customroute, whose host was not changed.
It turned out sub.domain.org was not a simple DNS entry but an Apache proxy-pass configuration. Apache was rewriting the Host header when passing the request along, so NGINX could not match the rewritten host: it received Apache's own hostname in the request instead of the target host.
To correct this behavior, we had to add the following configuration in Apache: ProxyPreserveHost on.
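For reference, a minimal sketch of the relevant Apache vhost fragment (names and addresses hypothetical):

<VirtualHost *:80>
    ServerName sub.domain.org
    ProxyPreserveHost On                 # forward the original Host header
    ProxyPass        / http://nginx-host/
    ProxyPassReverse / http://nginx-host/
</VirtualHost>

With ProxyPreserveHost On, Apache forwards the client's original Host header (sub.domain.org) instead of substituting the backend's hostname, so NGINX's server_name can match.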
Once we restarted Apache, the host was preserved and our server_name sub.domain.org was correctly matched in NGINX.

How can I host multiple apps under one domain name?

Say I own a domain name: domain, and I host a static blog at www.domain.com. The advantage of having a static site is that I can host it for free on sites like netlify.
I'd now like to have several static webapps under the same domain name, so I don't have to purchase a domain for each webapp. I can do this by adding a subdomain for my apps. Adding a subdomain is easy enough. This video illustrates how to do it with GoDaddy for example. I can create a page for my apps called apps.domain.com where apps is my subdomain.
Say, I have several static webapps: app1, app2, app3. I don't want a separate subdomain for each of these, e.g., app1.domain.com. What I'd like instead is to have each app as a subfolder under the apps subdomain. In other words, I'd like to have the following endpoints:
apps.domain.com/app1
apps.domain.com/app2
apps.domain.com/app3
At the apps.domain.com homepage, I'll probably have a static page listing out the various apps that can be accessed.
How do I go about setting this up? Do I need to have a server of some sort (e.g., nginx) at apps.domain.com? The thing is I'd like to be able to develop and deploy app1, app2, app3 etc. independently of each other, and independently of the apps subdomain. Each of these apps will probably be hosted by netlify or something similar.
Maybe there's an obvious answer to this issue, but I have no idea how to go about it at the moment. I would appreciate a pointer in the right direction.
Something along the lines of below should get you started if you decide to use nginx. This is a very basic setup. You may need to tweak it quite a bit to suit your requirements.
apps.domain.com will serve index.html from /var/www
apps.domain.com/app1 will serve index.html from /var/www/app1
apps.domain.com/app2 will serve index.html from /var/www/app2
apps.domain.com/app3 will serve index.html from /var/www/app3
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    index               index.html;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name apps.domain.com;
        root /var/www;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        location /app1 {
        }

        location /app2 {
        }

        location /app3 {
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
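The empty location blocks simply inherit root /var/www, so /app1 resolves to /var/www/app1/index.html and so on. One hedged refinement: if any of the apps is a single-page app with client-side routing, its location will likely need a try_files fallback to that app's own index.html, along these lines:

location /app1 {
    try_files $uri $uri/ /app1/index.html;
}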
I initially solved this problem using nginx. But I was very unhappy with that, because I needed to pay for a server, set up the architecture for it, etc.
The easiest way to do this, that I know of today, is to make use of URL rewrites. E.g. Netlify rewrites, Next.js rewrites.
Rewrites allow you to map an incoming request path to a different destination path.
Here is an example usage in my website.
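For instance, with Netlify the mapping can be sketched in a _redirects file at the site root (app URLs hypothetical; the 200 status makes each rule a rewrite/proxy rather than a redirect):

/app1/*  https://app1-site.netlify.app/:splat  200
/app2/*  https://app2-site.netlify.app/:splat  200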
Just one addition: if you're hosting the apps on an external server, you might want to set up nginx and use its proxy module to forward incoming requests from your nginx installation to the external webserver:
web-browser -> nginx -> external-web-server
And for the location that needs to be forwarded:
location /app1 {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass https://url-of-external-webserver;
}
It would seem that you're asking the question prematurely: what actual issues are you having when doing what you're trying to do using the naive approach?
It is generally best to have each app run on its own domain or subdomain. This is done to prevent XSS attacks, where a vulnerability in one of your apps may result in your whole domain becoming vulnerable, because security features are generally implemented in the browser on a per-domain basis, on the presumption that the whole domain is under the control of a single party (e.g., running a single app, at the end of the day).
Otherwise, there's really nothing special that must be done to have multiple apps on a single domain. Provided that the paths within each app are correct (e.g., they're either relative, or absolute with the full path of the location of the specific app), there really aren't any specific issues to be aware of, frankly.

NGINX -- show cached IPs for host names in config files?

[SHORT VERSION] I understand when NGINX looks at a config file, it does DNS lookups on the hostnames in it, and then stores the results (IP addresses the hostnames should resolve to) somewhere and uses them until the next time it looks at a config file (which, to my understanding, is not until the next restart by default). Is there a way to see this hostnames-to-ips mapping that my currently-running NGINX service has? I am aware there are ways to configure my NGINX to account for changes in IPs for a hostname. I wish to see what my NGINX currently thinks it should resolve my hostname to.
[Elaborated] I'm using the DNS name of an AWS ELB (classic) as the hostname for a proxy_pass. And since both the public and private IPs of an AWS ELB can change (without notice), whatever IP(s) NGINX has mapped for that hostname at the start of its service will become outdated upon such change. I believe the IP-change just happened for me, as my NGINX service is forwarding traffic to a cluster different than what is specified in its config. Restarting the NGINX service fixes the problem. But, again, I'm looking to SEE where NGINX currently thinks it should send the traffic to, not how to fix it or prevent it (plenty of resources online for working with dynamic upstreams, which I evidently should have consumed prior to deploying my NGINX services...).
Thank you in advance!
All you need is the resolver option.
http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver
With this option nginx will look up DNS changes without restarting, but only for the proxy_pass directive. It won't work if you are using upstream blocks: DNS re-resolution of upstream servers is supported only in NGINX Plus.
If you want to know the IP of the upstream server, there are a few ways:
- in the Plus version you can use the status module or the upstream_conf module, but the Plus version is not free
- some 3rd-party status modules
- write the IP to the log with each request: just add the $upstream_addr variable to your custom access log. $upstream_addr contains the IP address of the backend server used for the current request. Example config:
log_format upstreamlog '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $bytes_sent $upstream_addr';

server {
    ...
    access_log /tmp/test_access_log upstreamlog;
    resolver ip.of.local.resolver;

    location / {
        set $pass dns_name.of.backend;
        proxy_pass http://$pass;
    }
}
Note: always use a variable in proxy_pass; the resolver is only consulted in that case. Example log:
127.0.0.1 - - [10/Jan/2017:02:12:15 +0300] "GET / HTTP/1.1" 200 503 213.180.193.3:80
127.0.0.1 - - [10/Jan/2017:02:12:25 +0300] "GET / HTTP/1.1" 200 503 213.180.193.3:80
.... IP address changed, nginx wasn't restarted ...
127.0.0.1 - - [10/Jan/2017:02:13:55 +0300] "GET / HTTP/1.1" 200 503 93.158.134.3:80
127.0.0.1 - - [10/Jan/2017:02:13:59 +0300] "GET / HTTP/1.1" 200 503 93.158.134.3:80
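If you need to control how often nginx re-resolves the name, the resolver directive accepts a valid= parameter that overrides the DNS TTL, e.g. (resolver address hypothetical):

resolver 10.0.0.2 valid=30s;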
