Nginx S3 proxy: attempt to serve index.html on the first 404

I have a really simple setup: I just want nginx to act as a web server for S3.
That is, when a request hits a folder, instead of returning "key not found" from S3 I want it to return the index.html of that folder.
Chrome does this automatically, but I am using the bucket as a Python pip repository, which doesn't automatically try the index.html.
location / {
    aws_sign;
    proxy_pass http://[my bucket].s3.amazonaws.com;
}
All the answers I've been able to find revolve around serving local files instead, but that's not what I want. What I want is for nginx to retry the proxy with a different [url], namely [url]/index.html.
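One approach that seems to fit is to intercept the upstream 404 and retry from a named location. A minimal sketch, untested against S3, that keeps the aws_sign directive from the question; proxy_intercept_errors, error_page, and named locations are standard nginx features, but the interaction with signed S3 requests is an assumption:

location / {
    aws_sign;
    proxy_pass http://[my bucket].s3.amazonaws.com;

    # hand upstream 404s back to nginx instead of passing them to the client
    proxy_intercept_errors on;
    error_page 404 = @index;
}

location @index {
    aws_sign;
    # retry the same request with /index.html appended (the optional /? avoids
    # a double slash when the original URI already ends in one)
    rewrite ^(.*?)/?$ $1/index.html break;
    proxy_pass http://[my bucket].s3.amazonaws.com;
}

If the index.html itself can be missing, recursive_error_pages and a further error_page would be needed to avoid returning S3's raw error body to the client.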

Related

Static files not served even though they are declared in the nginx default config

I'm trying to get my static files accessible so that I can load them with Flask.
ex: https://example.com/static/render.css
I've included this in the HTTPS server block of my site:
location /static/ {
    autoindex on;
    alias /root/site/static;
}
But it always just returns a 403.
The directory is correct and all, and I've also run the chown -R /root/site/static/ command. I don't know what the issue is.
Nginx is definitely doing something with the request, though, as requests for anything else just return a Bad Gateway.
I decided to just write my CSS inline.
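For what it's worth, two things commonly cause exactly this 403: a trailing-slash mismatch between the location and the alias, and the nginx worker user (often www-data) lacking execute permission on /root, which is usually mode 700. A sketch of the slash fix, assuming the paths from the question:

location /static/ {
    autoindex on;
    # with a trailing slash on the location, alias should also end in a slash
    # so /static/render.css maps to /root/site/static/render.css
    alias /root/site/static/;
}

Even with that fixed, the worker process still needs traverse access to every directory in the path, so moving the files out of /root (for example to /var/www/site/static) is usually cleaner than loosening permissions on /root.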

What are the nginx configuration settings to host an R blogdown site on a DigitalOcean droplet?

I'm trying to host a static site on a DigitalOcean droplet and am having a bit of trouble deploying it. I've searched for solutions across the interwebs with limited success, and I'm also learning while doing, so to speak, so any help would be greatly appreciated. I also apologize if I am using incorrect terms or giving too much or irrelevant information; I'd rather err on the side of too much so it might be easier to spot if I did something wrong.
Goal: to have R and RStudio Server on a DigitalOcean droplet, connected to a domain I have purchased, and to use the R blogdown package to create a static website and also deploy it on the droplet.
Completed steps:
1) Installed Linux on the droplet and installed R, RStudio Server, and Shiny Server on it. This is working fine.
2) Added nameservers to point my domain at my DigitalOcean IP. This is working: www.mysite.com:8787 goes to my RStudio login and www.mysite.com:3838 goes to the Shiny server. I'd like to change 8787 and 3838 to something more descriptive, but I'm sure I can figure that out at a later point.
3) Generated a static test site in blogdown. The location is /home/user/website/public, where website is an R blogdown project and public is the folder holding index.html and all the files generated by blogdown. This works within RStudio and the files are created.
4) Attempted to change the nginx settings by altering the default file /etc/nginx/sites-enabled/default so it points at the /home/user/website/public directory.
This is where I'm stuck. I've followed a couple of guides, and whatever I do, www.mysite.com always displays the nginx welcome page.
What I've tried:
chmod -R 0755 /home/user/website/public
I believe this recursively sets permissions on the folder where my static site is held.
To edit the default file:
sudo nano /etc/nginx/sites-enabled/default
I've changed the root line to my static site directory and the server_name line to www.mysite.com (I also tried the actual IP address):
root /home/user/website/public;
server_name www.mysite.com mysite.com;
Following some guides, I have also attempted to make a server block and link it.
I created a new file, mysite.com, containing the following:
server {
    listen 80;
    listen [::]:80;
    root /home/user/website/public;
    index index.html index.htm index.nginx-debian.html;
    server_name mysite.com www.mysite.com;
    location / {
        try_files $uri $uri/ =404;
    }
}
Then link it:
sudo ln -s /etc/nginx/sites-available/mysite.com /etc/nginx/sites-enabled/
I have tried various combinations of these methods from different guides with the same result: the nginx welcome page at www.mysite.com.
I have just uninstalled and reinstalled nginx to start with a clean slate.
Am I on the right track here with the methodology? Has anyone had success hosting an R blogdown site on a DigitalOcean droplet and can share some advice or spot what I need to do?
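If the stock welcome page keeps winning, the usual culprits are the default site still being enabled alongside the new server block and nginx not being reloaded after the edit. A minimal sequence, assuming the Debian/Ubuntu sites-available/sites-enabled layout from the question:

# disable the stock default site so it can't shadow the new server block
sudo rm /etc/nginx/sites-enabled/default

# enable the new server block
sudo ln -s /etc/nginx/sites-available/mysite.com /etc/nginx/sites-enabled/

# check the syntax, then reload so the change actually takes effect
sudo nginx -t
sudo systemctl reload nginx

Alternatively, the default file can be edited directly as attempted above, but then the separate mysite.com block should not also be enabled; reinstalling nginx restores the stock default file, which would undo earlier edits.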

Nginx location and try_files in subdirectory

I'm trying to configure an nginx vhost for an application and I'm stuck.
The app is in the directory /site/verb.
At the moment the app uses links like the one below, and it works:
http://example.com/verb/lt.php?some_args=some_args&some_args?some_args
What do I need? I need to add another form of link for my clients, like the one below:
http://example.com/v/lt.php?some_args=some_args&some_args?some_args
The only change is from /verb to /v, but I want to handle both (for compatibility reasons), with all the arguments after the .php extension preserved.
Is this possible in the nginx config? (I want to avoid creating symlinks in the directory.)
I tried symlinks, but that is not a good solution.
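One approach that may fit is an internal rewrite, so that /v/... requests are handled by whatever already serves /verb/...; a sketch only, assuming the existing /verb/ handling works as described (rewrite keeps the query string by default, so the arguments after .php are preserved):

# map /v/... onto the existing /verb/... handling
location /v/ {
    rewrite ^/v/(.*)$ /verb/$1 last;
}

With last, nginx re-runs location matching against the rewritten URI, so the request ends up in the same PHP handling as a direct /verb/ link, and clients never see the internal /verb path.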

Docker/nginx [Windows 10]: change a page in a linked folder and see the changes directly in the browser

Question: How can I change the index.html file in docker/nginx and see the changed results in my browser?
The index.html is in a private folder mounted onto the standard nginx folder.
What did I do:
After installing docker and nginx via the official pages, I created a folder under /Users/me123/docker/webapp/html. In this folder I created an index.html file.
I linked this folder to the default nginx folder via the following command. I tried both the read-only (:ro) and read-write versions:
docker run --name nginx -p 80:80 -v //c/Users/me123/docker/webapp/html:/usr/share/nginx/html -d nginx
So, when editing the file (with e.g. Notepad++) I expected to see the updates. Alas, nothing changed, even in Firefox with Ctrl-F5 or the refresh button, so this is really not a browser caching problem. I visited the page via 192.n.n.n/index.html.
When I delete the index.html file, I get an error. When I put the index.html file back, I see the old index.html content.
I saw a post suggesting this may be due to inode synchronisation/updates, so a 'docker restart nginx' should be sufficient. Alas, even the sequence 'docker stop nginx' and 'docker rm nginx-package-via-number' doesn't help.
When I add a new file to my local /Users/me123/docker/webapp/html folder, I immediately see the contents.
So, how can I change the index.html file and see the changed results in my browser?
I also tried adding tags to my index.html to prohibit caching, to no avail.
The solution is to change the nginx.conf file.
Use the following and the problem is solved:
sendfile off;
By default it is set to on.
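For context, a sketch of where this setting could go; the official image ships its own nginx.conf, so one way to apply the change is to mount a modified copy over it (the paths below are assumptions):

# minimal nginx.conf, mounted over /etc/nginx/nginx.conf in the container
events {}

http {
    include /etc/nginx/mime.types;

    # sendfile serves files from the kernel's cached pages, which are not
    # invalidated when the file is edited through the Windows shared-folder
    # mount, so the old content keeps being served until this is turned off
    sendfile off;

    server {
        listen 80;
        root /usr/share/nginx/html;
        index index.html;
    }
}

The extra volume mount would then look something like -v //c/Users/me123/docker/webapp/nginx.conf:/etc/nginx/nginx.conf:ro on the existing docker run command.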

Applying an NGINX location based on the extension of the default file

I'm trying to create an alternate location in NGINX that will only fire for a specific file type. Specifically, I have NGINX acting as a proxy for a server that primarily serves PHP files. There are, however, a number of folders that also contain ASPX files (more than 120), and I need to use a different configuration when serving them (different caching rules, different ModSecurity configuration, etc.).
NGINX is successfully detecting the file type and applying the alternate location when the file name is specifically listed, but it's breaking when the ASPX file is the default file in the folder and the URL simply ends in a slash. When that happens, it's just applying the root location configuration. Is there a way to detect the extension of an index file and apply an alternate location, even when the name of the index file isn't specifically entered?
server {
    listen 80;
    server_name mysite.com;

    # serves php files and everything else
    location / {
        # general settings applicable to most files
        proxy_pass http://#backend;
    }

    # serves .Net files
    location ~* \.(aspx|asmx) {
        # slightly different settings applicable to .Net files
        proxy_pass http://#backend;
    }
}
If a folder has a file called "default.aspx" configured as its index, the above configuration works perfectly when I enter the URL as mysite.com/folder/default.aspx, but it applies only the base location when I enter it as mysite.com/folder, even though it is serving the exact same default.aspx file.
The only solution I've found is to alter the location directive to match by folder name instead of file extension, but this doesn't scale well, as there are more than 120 affected folders on the server and I'd end up with a huge conf file.
Is there any way to specify a location by file extension when the file isn't explicitly named in the URL? Can I test a folder's index file to determine its extension before a location is applied?
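As a point of reference rather than a full answer: nginx itself has no way to ask the upstream which index file a bare directory URL will resolve to before picking a location, so some listing of the affected folders seems unavoidable. One way to keep that list manageable is to generate it into a single include file that rewrites bare folder URLs onto their .aspx index, so the existing extension-based location catches them; a sketch with hypothetical folder names:

# aspx_folders.conf (hypothetical), pulled into the server block with an
# include directive and regenerated whenever the folder list changes
location ~* ^/(folder1|folder2|folder3)/?$ {
    # re-run location matching against the explicit index file so the
    # \.(aspx|asmx) location above applies its settings
    rewrite ^/(.+?)/?$ /$1/default.aspx last;
}

This assumes every affected folder uses default.aspx as its index; if the index name varies, the rewrite target would have to vary per folder as well.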
