Nginx http2_push issue

I am trying to implement http2_push using nginx on Windows 7. I followed the steps mentioned in this article.
I'm running the nginx 1.13.12 executable version. I have created and installed self-signed certificates, and they work fine.
As mentioned in this answer, I checked and solved the certificate validation issue as well.
Still, the files I want to push are not getting pushed to the browser. I am checking this through the Network tab in the inspector (Google Chrome; screenshot attached).
nginx.conf
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 443 ssl http2;
        server_name localhost;

        ssl_certificate ssl/localhost.crt;
        ssl_certificate_key ssl/localhost.key;

        location = /test.html {
            root html;
            http2_push /stylepush.css;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Output (Screenshot):
Can anyone point out where I am going wrong? Thanks in advance for the help.

HTTP/2 push only works when the pushed resource is needed by the page (i.e. it's referenced in the HTML). In this case, the fact that /stylepush.css is not loaded by the page at all (never mind by Push as the initiator) shows it is not being used.
If you go to chrome://net-internals/#http2 you should see this as an unclaimed push:
Add a reference to this CSS file in your HTML and you should see it as pushed.
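For example, test.html needs something like this in its <head> (a minimal one-line sketch; the path matches the http2_push directive above):
<link rel="stylesheet" href="/stylepush.css">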
If not, then go to chrome://net-internals/#events&q=type:HTTP2_SESSION in Chrome and provide the HTTP/2 Session data.
Additionally, Chrome requires a recognised certificate before it allows you to cache resources (and HTTP/2 pushed resources go into a cache before they are used). Since Chrome version 58, it also requires the Subject Alternative Name (SAN) to be set on the certificate, which takes some extra configuration when creating a self-signed certificate.
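For example, with OpenSSL 1.1.1 or later, a self-signed certificate with a SAN can be created in one command (a sketch, assuming the certificate is for localhost; adjust the names and validity to taste):

openssl req -x509 -newkey rsa:2048 -nodes -sha256 \
    -keyout localhost.key -out localhost.crt -days 365 \
    -subj "/CN=localhost" -addext "subjectAltName=DNS:localhost"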

Related

How can I host multiple apps under one domain name?

Say I own a domain name: domain, and I host a static blog at www.domain.com. The advantage of having a static site is that I can host it for free on services like Netlify.
I'd now like to have several static web apps under the same domain name, so I don't have to purchase a domain for each one. I can do this by adding a subdomain for my apps. Adding a subdomain is easy enough; this video illustrates how to do it with GoDaddy, for example. I can create a page for my apps at apps.domain.com, where apps is my subdomain.
Say I have several static web apps: app1, app2, app3. I don't want a separate subdomain for each of these (e.g., app1.domain.com). What I'd like instead is to have each app as a subfolder under the apps subdomain. In other words, I'd like to have the following endpoints:
apps.domain.com/app1
apps.domain.com/app2
apps.domain.com/app3
At the apps.domain.com homepage, I'll probably have a static page listing out the various apps that can be accessed.
How do I go about setting this up? Do I need to have a server of some sort (e.g., nginx) at apps.domain.com? The thing is, I'd like to be able to develop and deploy app1, app2, app3, etc. independently of each other, and independently of the apps subdomain. Each of these apps will probably be hosted on Netlify or something similar.
Maybe there's an obvious answer to this issue, but I have no idea how to go about it at the moment. I would appreciate a pointer in the right direction.
Something along the lines of the following should get you started if you decide to use nginx. This is a very basic setup; you may need to tweak it quite a bit to suit your requirements.
apps.domain.com will serve index.html from /var/www
apps.domain.com/app1 will serve index.html from /var/www/app1
apps.domain.com/app2 will serve index.html from /var/www/app2
apps.domain.com/app3 will serve index.html from /var/www/app3
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    index index.html;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name apps.domain.com;
        root /var/www;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        # Each location below is served from the matching subdirectory
        # of /var/www via the root directive above.
        location / {
        }

        location /app1 {
        }

        location /app2 {
        }

        location /app3 {
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
I initially solved this problem using nginx. But I was unhappy with that approach, because I had to pay for a server, set up the architecture for it, and so on.
The easiest way to do this that I know of today is to make use of URL rewrites, e.g. Netlify rewrites or Next.js rewrites.
Rewrites allow you to map an incoming request path to a different destination path.
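For example, with Netlify this can go in a _redirects file at the root of the apps.domain.com site (a sketch, assuming each app is deployed as its own Netlify site; the *.netlify.app URLs are placeholders):

/app1/*  https://app1-example.netlify.app/:splat  200
/app2/*  https://app2-example.netlify.app/:splat  200
/app3/*  https://app3-example.netlify.app/:splat  200

The trailing 200 status makes these proxying rewrites rather than redirects, so the address bar stays on apps.domain.com.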
Here is an example usage in my website.
Just one addition: if you're hosting the apps on an external server, you might want to set up nginx and use its proxy module to forward incoming requests from your nginx installation to the external web server:
web-browser -> nginx -> external-web-server
And for the location that needs to be forwarded:
location /app1 {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass https://url-of-external-webserver;
}
It would seem that you're asking the question prematurely: what actual issues are you having when you try the naive approach?
It is generally best to have each app run on its own domain or subdomain; this prevents XSS attacks, where a vulnerability in one of your apps could leave your whole domain vulnerable. Browser security features are generally implemented on a per-domain basis, on the presumption that the whole domain is under the control of a single party (i.e., running a single app, at the end of the day).
Otherwise, there's really nothing special to be done to host multiple apps on a single domain. Provided that the paths within each app are correct (e.g., they're either relative, or absolute with the full path to the specific app's location), there really aren't any specific issues to be aware of.

nginx in front of wordpress configuration causes "Request exceeded the limit of 10 internal redirects due to probable configuration error."

I have a site https://example.com running on instance 1 on AWS EC2:
nginx + WildFly app server
Everything works fine: nginx proxy-passes to WildFly, and HTTPS is configured.
I would now like to set up WordPress on a different EC2 instance so that it is accessible via https://example.com/blog.
I set up a dedicated instance for WordPress and launched WordPress using Docker Compose as described here: https://docs.docker.com/compose/wordpress/
I configured Compose so that WordPress is accessible via port 80 on that instance.
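For context, the Compose file from that guide looks roughly like this, with the WordPress port published on 80 (a sketch; the credentials and image versions are placeholders, not the actual values used on the instance):

version: '3.3'

services:
  db:
    image: mysql:5.7
    restart: always
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    restart: always
    ports:
      - "80:80"   # WordPress answers on port 80 of the instance
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress

volumes:
  db_data: {}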
I configured nginx as follows:
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    server_name <server_name>;
    root /usr/share/nginx/html;

    ssl_certificate "/etc/ssl/certs/<path_to_cert>/ssl-bundle.crt";
    ssl_certificate_key "/etc/ssl/certs/<path_to_cert>/private.key";
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!RSAPSK:!aDH:!aECDH:!EDH-DSS-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!SRP;
    ssl_prefer_server_ciphers on;

    include /etc/nginx/default.d/*.conf;

    # main site, deployed on the same instance as nginx
    location / {
        proxy_pass http://127.0.0.1:8080;
    }

    # wordpress, deployed on another instance
    location /blog/ {
        proxy_pass http://<my_wordpress_instance_local_ip>/;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
I installed WordPress (using direct access to it) and set both WordPress Address (URL) and Site Address (URL) to "https://example.com/blog".
Now I can access the admin and the home page https://example.com/blog fine. But when I click to view the test "Hello World" post (URL: https://example.com/blog/2018/09/28/hello-world/) I get a 500 Internal Server Error.
I looked at the WordPress Docker log and found the following error:
"[Fri Sep 28 18:17:25.721177 2018] [core:error] [pid 308] [client :47792] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: https://example.com/blog/
"
How can I fix that?
Disclaimer: I don't have much experience with nginx or WordPress, so sorry if this is a naive question. I tried the different options I found on the internet, but they either don't work or describe a manual WordPress setup (not one using the standard Docker image).
UPDATES:
I updated the description of the nginx conf: the whole configuration is now provided.
My WordPress Permalink setting is "Day and name". I tried all of them and found that if I switch to "Plain", everything works with no error; all the other settings cause the error. Obviously "Plain" won't do for me, and I would like to use something better than that.
I found that the following answer fixes the issue:
https://stackoverflow.com/a/4521620/2033394
It requires modifying the image, though. So it is not ideal, but it does make everything work fine.
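For reference, fixes of this kind usually come down to adjusting the WordPress rewrite rules inside the container so they match the /blog/ subpath (a sketch only, assuming an Apache-based WordPress image; verify the details against the linked answer):

# .htaccess inside the WordPress container (Apache)
RewriteEngine On
RewriteBase /blog/
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /blog/index.php [L]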

docker-registry nginx REST API

I am trying to build a docker-registry server from source (not as a container) on Ubuntu 14.04.1. I was able to get most of the way there using the instructions found on DigitalOcean.
I am able to curl http://localhost:5000 and https://user:password@localhost:8000 with no problems.
The issues seem to start when I open a web browser hoping to see more than just that.
Here is my docker-registry file in /etc/nginx/sites-available/:
# For versions of Nginx > 1.3.9 that include chunked transfer encoding support
# Replace with appropriate values where necessary
upstream docker-registry {
    server 192.168.x.x:5000;
}

server {
    listen 8000;
    server_name docker-registry;

    ssl on;
    ssl_certificate /etc/nginx/ssl/docker-registry.crt;
    ssl_certificate_key /etc/nginx/ssl/docker-registry.key;

    proxy_set_header Host $http_host;        # required for Docker client sake
    proxy_set_header X-Real-IP $remote_addr; # pass on real client IP

    client_max_body_size 0; # disable any limits to avoid HTTP 413 for large image uploads

    # required to avoid HTTP 411: see Issue #1486 (https://github.com/dotcloud/docker/issues/1486)
    chunked_transfer_encoding on;

    location / {
        # let Nginx know about our auth file
        auth_basic "Restricted";
        auth_basic_user_file docker-registry.htpasswd;
        proxy_pass http://docker-registry;
    }

    location /_ping {
        auth_basic off;
        proxy_pass http://docker-registry;
    }

    location /v1/_ping {
        auth_basic off;
        proxy_pass http://docker-registry;
    }
}
I have my Docker registry stored locally in /var/docker-registry and have ensured that it is readable by the www-data user. Why can I not see my images in the web browser?
If I tag an image and push it to my repository, it works; I can see it in the web browser:
https://192.168.x.x:8000/v1/repositories/ubuntu-test/tags/latest
I see the following:
"5ba9dab47459d81c0037ca3836a368a4f8ce5050505ce89720e1fb8839ea048a"
When I try to get to:
https://192.168.x.x:8000/v1
Or:
https://192.168.x.x:8000/v1/repositories
Or:
https://192.168.x.x:8000/v1/images
I get a "not found" error
How would I be able to see everything in my /var/docker-registry folder (which is where these are stored... and yes, they are owned by the www-data user) through the web interface?
This is by design. Not only is there no reason to expose the entire URL path hierarchy for browsing; doing so would also have severe security implications.
I'm assuming you don't have much experience with web programming. There is no directory /v1/repositories, etc. Instead, there is a program (in this case written in Python or Ruby) that listens for the URL path and has logic built in to determine what to do. For instance (pseudocode):
if url == '/v1/_ping': return 'ok'
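A minimal sketch of that idea (hypothetical Flask-style routing, not the registry's actual code):

# app.py: URL paths are handled by route handlers, not by directories on disk
from flask import Flask

app = Flask(__name__)

@app.route('/v1/_ping')
def ping():
    # no /v1/_ping directory exists; this function decides the response
    return 'ok'

if __name__ == '__main__':
    app.run(port=5000)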

nginx webdav could not open collection

I have built nginx on a FreeBSD system with the following configuration parameters:
./configure ... --with-http_dav_module
Now this is my configuration file:
user www www;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    # reserve 1MB under the name 'proxied' to track uploads
    upload_progress proxied 1m;

    sendfile on;
    #tcp_nopush on;

    client_max_body_size 500m;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    #upload_store /var/tmp/firmware;
    client_body_temp_path /var/tmp/firmware;

    server {
        server_name localhost;
        listen 8080;

        auth_basic "Restricted";
        auth_basic_user_file /root/.htpasswdfile;

        create_full_put_path on;
        client_max_body_size 50m;
        dav_access user:rw group:r all:r;
        dav_methods PUT DELETE MKCOL COPY MOVE;

        autoindex on;
        root /root;

        location / {
        }
    }
}
Next, I check the syntax of the configuration file by issuing nginx -t, and then do a graceful reload as follows: nginx -s reload.
Now, when I point my web browser to nginx-ip-address:8080, I get the list of my files and folders and so on and so forth (I think that is due to the autoindex on directive).
But the problem is that when I try to test WebDAV using cadaver as follows:
cadaver http://nginx-ip-address:8080/
it asks me to enter authorization credentials, and then, after I enter them, it gives me the following error:
Could not open Collection: 405 Not Allowed
And the following is the nginx error log line that occurs at the same time:
*125 no user/password was provided for basic authentication, client: 172.16.255.1, server: localhost, request: "OPTIONS / HTTP/1.1", host: "172.16.255.129:8080"
The username and password work just fine when I try to access it from the web browser, so what is happening here?
It turns out that the WebDAV module built into nginx is incomplete; to enable full WebDAV, we need to add the following external third-party module: nginx-dav-ext-module.
Link to its GitHub: https://github.com/arut/nginx-dav-ext-module.git
The configure parameter would now be:
./configure --with-http_dav_module --add-module=/path/to/the/above/module
The built-in module provides only the PUT, DELETE, MKCOL, COPY and MOVE DAV methods.
nginx-dav-ext-module adds the following additional DAV methods: PROPFIND and OPTIONS.
You will also need to edit the configuration file to add the following line:
dav_ext_methods PROPFIND OPTIONS;
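In context, it sits next to dav_methods in the server (or location) block, something like this (a sketch based on the configuration above):

server {
    listen 8080;
    root /root;
    create_full_put_path on;
    dav_access user:rw group:r all:r;
    # methods from the built-in module
    dav_methods PUT DELETE MKCOL COPY MOVE;
    # methods added by nginx-dav-ext-module (needed by cadaver and most DAV clients)
    dav_ext_methods PROPFIND OPTIONS;
}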
After doing so, check that the syntax of the conf file is intact by issuing: nginx -t
and then soft-reload (gracefully) nginx: nginx -s reload
And voilà! You should now be able to use cadaver or any other DAV client program to get into the directories.
I cannot believe that I solved this, it drove me nuts for a while!

ember.js application does not update hashtag part of URI with NGINX server

I have an ember.js application that I developed on my local machine. I use a restify/node.js server to make it available locally.
When I navigate in my application, the address bar changes like this:
Example 1
1. http://dev.server:3000/application/index.html#about
2. http://dev.server:3000/application/index.html#/items
3. http://dev.server:3000/application/index.html#/items/1
4. http://dev.server:3000/application/index.html#/items/2
I am now trying to deploy it on a remote test server that runs nginx.
Everything works well locally, but on the test server I can navigate within my web application while the part of the URI after the hashtag is never updated.
In any browser, http://test.server/application/index.html is always displayed in my address bar. For the same sequence of clicks as in Example 1, I always have:
1. http://web.redirection/application/index.html
2. http://web.redirection/application/index.html
3. http://web.redirection/application/index.html
4. http://web.redirection/application/index.html
Moreover, if I directly enter a complete URI such as http://web.redirection/application/index.html#/items/1, the browser will only display the content that is at http://test.server/application/index.html (which is definitely not the expected behaviour).
I suppose this comes from my nginx configuration, since the application works perfectly on a local restify server.
NGINX configuration for this server is:
test.server.conf (which is symlinked into /etc/nginx/sites-enabled/test.server.conf)
server {
    server_name test.server web.redirection;
    root /usr/share/nginx/test;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location ~ \.csv$ {
        alias /usr/share/nginx/test/$uri;
    }
}
nginx.conf
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log debug;

    gzip on;
    gzip_disable "msie6";

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
EDIT:
Just to be sure that there were no missing files on my test server, I ran a restify/node server there (like on my dev machine), and everything works fine when I connect to that server (!). Both the nginx and restify servers point to the same files.
EDIT 2
I discovered that the problem happens when I use a web redirection.
If I use an address like http://test.server/application/index.html, everything works fine.
If I use http://web.redirection/application/index.html, it does not work.
So it is my nginx conf that is not correctly redirecting the web.redirection URI to test.server, or something like that.
Does someone have an idea? What am I missing? What should I change to make this work?
EDIT 3 and solution
The web redirection I used was an A-type DNS record. This does not work. Using a CNAME-type DNS record solves the issue.
No, this has nothing to do with nginx: anything past the # is never sent to the server; it is handled by JavaScript code. I would suggest using Firebug or any inspector to make sure that all your JS files are being loaded and that nothing fails with a 404 error; also check for errors in the inspector console.
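To illustrate: for http://test.server/application/index.html#/items/1, the request the browser actually sends contains no fragment at all (request line only; other headers trimmed):

GET /application/index.html HTTP/1.1
Host: test.server

The #/items/1 part never leaves the browser; the Ember router is what reads it.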
The problem came from the DNS redirection from web.redirection to test.server.
It was an A-type record: this does not work.
Using a CNAME-type record that points directly to test.server works.
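In zone-file terms, the change looks something like this (a sketch; the IP address is a placeholder):

; before (did not work): web.redirection had its own A record
;web.redirection.  IN  A      203.0.113.10
; after (works): alias web.redirection to the canonical host
web.redirection.   IN  CNAME  test.server.
test.server.       IN  A      203.0.113.10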
