I've been trying for hours with lots of solutions but cannot get rid of this 403 error when serving a static subdomain with NGINX.
I've tried chmod-ing the permissions all the way down the directory tree to the static folder and editing the config file over and over.
NGINX serves my reverse-proxied Node app beautifully but returns 403 for all the static subdomains that used to be served from this server.
Permissions:
dr-xr-xr-x root root /
drwxr-xr-x root root home
drwx--x--x ca****8sh nginx ca****8sh
lrwxrwxrwx ca****8sh ca****8sh www -> public_html
drwxr-x--- ca****8sh ca****8sh public_html
drwxr-xr-x nginx nginx residenza******.******ano.ch
config file:
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name residenza******.******ano.ch;
    root /home/ca****8sh/www/residenza******.******ano.ch/;

    location / {
        #try_files $uri $uri/ =404;
        index index.html;
        #autoindex on;
        #autoindex_exact_size off;
    }
[continues with SSL setup]
I've also tried tweaking things like enabling autoindex, but to no avail.
I'm in despair, please help!
Check which user nginx runs as in the first line of nginx.conf; it should be either nginx or www-data. Then run this command, replacing www-data with nginx if the user is nginx:
chown -R www-data /home/ca****8sh/www/residenza******.******ano.ch/
If you are on a distribution that uses SELinux, such as CentOS, run these commands too:
sudo setsebool -P httpd_can_network_connect on
chcon -Rt httpd_sys_content_t /home/ca****8sh/www/residenza******.******ano.ch/
I handled it by changing the global nginx user to a user with broader permissions. That was what was causing the permissions issue.
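For reference, that change is just the user directive at the top of /etc/nginx/nginx.conf. A minimal sketch, assuming the site files are owned by the account from the question (the user name here is only illustrative):

# /etc/nginx/nginx.conf -- first directive: run worker processes as the site owner
user ca****8sh;

After editing it, test and reload with sudo nginx -t && sudo systemctl reload nginx.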
Related
I'm using Oracle Cloud.
I created a VM (compute instance) on Oracle Cloud with CentOS 8, installed NGINX, and it works well when I test it with http://mydeal.servername.com.
To serve HTTPS with NGINX, I also installed certbot (Let's Encrypt) and created a certificate using the following command:
sudo certbot --standalone -d mydeal.servername.com certonly
The resulting files were as below:
Cert: /etc/letsencrypt/live/mydeal.servername.com/fullchain.pem
Key: /etc/letsencrypt/live/mydeal.servername.com/privkey.pem
I added http and https to the firewall service list as below:
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
And I created a test index.html as below:
sudo -i
mkdir /var/www
mkdir /var/www/mydeal
echo "MyDeal at Oracle Cloud" > /var/www/mydeal/index.html
And I put the HTTPS settings, including the HTTP-to-HTTPS redirect, in /etc/nginx/conf.d/my.conf:
server {
    listen 80;
    server_name my.servername.com;

    location / {
        root /var/www/mydeal;
        index index.html;
        try_files $uri /index.html;
    }

    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name mydeal.servername.com;

    ssl_certificate /etc/letsencrypt/live/mydeal.servername.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydeal.servername.com/privkey.pem;

    location / {
        root /var/www/mydeal;
        index index.html;
        try_files $uri /index.html;
    }
}
Finally, when I start the nginx server with the following commands, it works well:
sudo -i
sudo nginx
But when I start the nginx server with the following commands, the browser shows a "500 Internal Server Error":
sudo systemctl enable nginx
sudo systemctl start nginx
I cannot find any difference between the two start procedures.
How can I debug this problem?
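For what it's worth, these are the generic checks I have been running to look for clues; they are not specific to my setup:

sudo systemctl status nginx
sudo journalctl -u nginx --no-pager -n 50
sudo tail -n 50 /var/log/nginx/error.log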
EDIT:
OK, this is still very strange. It seems that it does not work in my main browser, but in an incognito window or a completely new Chrome window the sites do work. I guess it has something to do with browser caching?
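To take the browser out of the equation entirely, I am also checking the redirect and the HTTPS response from the command line, roughly like this:

curl -I http://mydeal.servername.com
curl -I https://mydeal.servername.com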
So I am hosting my website on DigitalOcean and I want to host multiple 'websites' on one droplet/server. By multiple websites, I mean different subdomains of my main domain: I want to host admin.domain.com, dev.domain.com and test.domain.com on the same server. I followed this tutorial, but it is not working as expected. Currently, only 1 of the 3 subdomains is working...
What have I tried so far?
First off, I created 3 A records in my DNS, all pointing to the IP of the same droplet on DigitalOcean.
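To double-check that all three names resolve to that droplet (using the .nl names from my config files below), each of these should return the same IP:

dig +short admin.domain.nl
dig +short dev.domain.nl
dig +short test.domain.nl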
I've created a different folder for each subdomain in the /var/www folder, each containing an html folder with a simple index.html file and some html:
The image above shows my /var/www folder.
I then ran the following command: sudo chmod -R 755 /var/www.
Next, I copied the default server block file and used it as the basis for a new server block, with the following command:
sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/admin.domain.nl
I changed the contents of all 3 config files for the 3 subdomains to the following (obviously changing the root to the specific subdomain as well as the server_name):
server {
    listen 80;
    listen [::]:80;

    root /var/www/admin.domain.nl/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name admin.domain.nl;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }
}
I then ran the following command three times, once for each subdomain, to 'enable' the server blocks: sudo ln -s /etc/nginx/sites-available/dev.domain.nl /etc/nginx/sites-enabled/ (the full set is spelled out below).
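Spelled out, the three commands looked roughly like this (file names taken from the subdomain configs above):

sudo ln -s /etc/nginx/sites-available/admin.domain.nl /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/dev.domain.nl /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/test.domain.nl /etc/nginx/sites-enabled/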
This is my sites-enabled folder:
There were no syntax errors, so I restarted nginx with: sudo systemctl restart nginx.
The problem
Now, for some very odd reason I do not understand, only the admin.domain.nl site is working. The other 2 subdomains simply display: This site can’t be reached.
What am I missing here?
In /etc/nginx/nginx.conf:
http {
    # ... other contents ...

    # at the bottom:
    server {
        listen 80;
        root /var/www/html/cmp/api;
        server_name cmpapi.localhost;
        index index.html index.php;

        location / {
            try_files $uri $uri/ $uri.html $uri.php =404;
        }
    }

    server {
        listen 80;
        root /var/www/html/cmp/frontend;
        server_name cmp.localhost;
        index index.html index.php;

        location / {
            try_files $uri $uri/ $uri.html $uri.php =404;
        }
    }
}
Here my project name is cmp, and there are 2 projects:
one is a React frontend
the other is a Laravel API project
Two folders were created:
/var/www/html/cmp/api
This is assigned to http://cmpapi.localhost (server_name)
/var/www/html/cmp/frontend
This is assigned to http://cmp.localhost (server_name)
Commands:
sudo find /var/www/html/cmp/frontend/ -type f -exec chmod 644 {} \;
sudo find /var/www/html/cmp/frontend/ -type d -exec chmod 755 {} \;
sudo find /var/www/html/cmp/api/ -type f -exec chmod 644 {} \;
sudo find /var/www/html/cmp/api/ -type d -exec chmod 755 {} \;
sudo systemctl reload nginx
Browser:
http://cmpapi.localhost
http://cmp.localhost
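To verify from the command line as well (assuming *.localhost resolves to 127.0.0.1 on this machine), something like:

curl -I http://cmpapi.localhost
curl -I http://cmp.localhost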
I am using nginx with the HTTP WebDAV module to upload and delete files on the server. I am able to upload and delete files successfully, but the issue I am facing is that the files it uploads do not have execute permissions:
-rw-rw-rw- 1 nginx nginx 1583670 Apr 19 17:20 startup.jpg
whereas the folders it creates have all the permissions:
drwxrwxrwx 2 nginx nginx 4096 Apr 19 16:27 s
I tried adding rwx to the nginx config, but I get an error; it works fine with rw, but with x it gives an error:
Apr 19 17:23:20 CDNSTORE nginx[18386]: nginx: [emerg] invalid value "group:rwx" in /etc/nginx/conf.d/webdav.conf:11
The following is my nginx config:
server {
    listen 80;
    server_name localhost;

    root /home/webdav/files;
    client_body_temp_path /home/webdav/tmp;

    location / {
        dav_methods PUT DELETE MKCOL COPY MOVE;
        # dav_ext_methods PROPFIND OPTIONS;
        create_full_put_path on;
        dav_access user:rw group:rwx all:rwx;

        autoindex on;
        client_max_body_size 1G; # File size limit for new files

        auth_basic "closed site";
        auth_basic_user_file /home/webdav/.htpasswd;
    }
}
I want the image files to be accessible from a URL, and for that I need to set -rwxr-xr-x permissions.
I don't want to use cron and a shell script to fix the permissions, because I create folders and subfolders on the fly and they need the right permissions so WebDAV can delete them, so I am looking for a way to have nginx set the permissions when uploading the file.
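From the error message it looks like dav_access only understands read and write bits, so the most I can get nginx to accept is something like the line below; I still have not found a way to get the execute bit set at upload time:

dav_access user:rw group:rw all:r;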
I've been playing around with my site's nginx conf file with little success. All I want is for people who type "example.com" or "http://example.com" in the address bar to be automatically redirected to "https://www.example.com".
Try something like this in the nginx config file for your site. If you are on Ubuntu, this file is likely located at something like /etc/nginx/sites-available/example.
server {
    listen 80;
    listen [::]:80 default_server;

    server_name example.com;

    return 301 https://www.example.com$request_uri;
}
Once you have updated the config file, be sure to test and reload NGINX using sudo nginx -t && sudo service nginx reload.
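You can then confirm the redirect without a browser; with any path, the response should be a 301 with a Location header pointing at the www HTTPS URL:

curl -I http://example.com/some/page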
Sorry for the noob question; I'm not very experienced with Ubuntu.
I have just installed nginx on an Ubuntu server with:
sudo apt-get update
sudo apt-get -y install nginx
It installed successfully. I'm trying to change the index page, so I have modified /usr/share/nginx/html/index.html and then tried all of these:
sudo service nginx stop
sudo service nginx start
sudo service nginx restart
But when I refresh the root page on my browser it still shows the old page.
This is what the index.html looks like:
I have checked my /etc/nginx/nginx.conf but don't see anything unusual there.
What could I be missing?
If you check the vhost, you will see that the root directory is /var/www/html.
The vhost files are in /etc/nginx/sites-available and /etc/nginx/sites-enabled (sites-enabled contains symlinks).
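A quick way to see which root directive the enabled vhosts actually use:

grep -R "root" /etc/nginx/sites-enabled/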
The file NGINX serves by default on Debian/Ubuntu is:
/var/www/html/index.nginx-debian.html
If you update this file, the changes will be reflected immediately, without needing any of these:
sudo service nginx stop
sudo service nginx start
sudo service nginx restart
I had the same problem before; after updating the nginx conf by moving 'root' from 'server/location' up to 'server', it worked. Nginx config file:
server {
    listen 443 ssl;
    server_name localhost;

    root /usr/share/nginx/html/rdist;

    location /user/ {
        proxy_pass http://localhost:9191;
    }
    location /api/ {
        proxy_pass http://localhost:9191;
    }
    location /auth/ {
        proxy_pass http://localhost:9191;
    }

    location / {
        index index.html index.htm;
        if (!-e $request_filename){
            rewrite ^(.*)$ /index.html break;
        }
    }
}
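As a side note, the if/rewrite fallback can usually be replaced with try_files, which is the more idiomatic way to express the same single-page-app fallback; a minimal sketch of that location block:

location / {
    index index.html index.htm;
    try_files $uri $uri/ /index.html;
}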