Endless Redirect Nginx Server

I am trying to set up an nginx server with a reverse proxy via a bash script, but somehow I end up in an endless redirect loop and I don't know why. Does anyone have an idea?
sudo apt install nginx -y
sudo apt-get remove apache2* -y
sudo apt install ufw -y
sudo ufw allow ssh
sudo ufw allow 'Nginx Full'
sudo systemctl start nginx
sudo systemctl restart nginx
#sudo systemctl status nginx
sudo mkdir -p /var/www/servername.com/html
sudo chmod -R 755 /var/www/servername.com
sudo chown -R $USER:$USER /var/www/servername.com/html
sudo bash -c 'echo "<html> <head> <title>Welcome to servername.com!</title> </head> <body> <h1>Success! The servername.com server block is working!</h1> <b>Meow Meow!</b> </body> </html>" > /var/www/servername.com/html/index.html'
sudo bash -c 'echo "server {
    listen 80;
    listen [::]:80;
    root /var/www/servername.com/html;
    index index.html index.htm index.nginx-debian.html;
    # You can put your domain name here, for example www.servername.com
    server_name MYIP;
    location / {
        try_files $uri $uri/ =404;
    }
}" > /etc/nginx/sites-available/servername.com'
sudo ln -s /etc/nginx/sites-available/servername.com /etc/nginx/sites-enabled/
sudo systemctl restart nginx
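One robustness note on the script above, as a hedged sketch: because the config is written with `echo` inside double quotes, the inner shell expands `$uri` to an empty string before nginx ever sees the file. A quoted heredoc avoids that. The path below is a scratch location chosen for illustration, not the script's real target:

```shell
# Write the server block with a quoted heredoc ('EOF') so the shell
# leaves $uri untouched; CONF is a scratch path used for illustration.
CONF="${CONF:-/tmp/servername.com.conf}"
cat > "$CONF" <<'EOF'
server {
    listen 80;
    listen [::]:80;
    root /var/www/servername.com/html;
    index index.html index.htm index.nginx-debian.html;
    server_name servername.com www.servername.com;
    location / {
        try_files $uri $uri/ =404;
    }
}
EOF

# After copying the file into /etc/nginx/sites-available/, validate before
# reloading so a broken config never takes the site down:
#   sudo nginx -t && sudo systemctl reload nginx
```

Validating with `nginx -t` before the reload also surfaces exactly the kind of silent breakage the `echo` version produces.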

Related

Error 502 - Bad Gateway when trying to set up a server using Cloudflare and Nginx on port 8080

I am setting up a website using Nginx and Cloudflare. I followed the steps from a website, but for some reason it keeps dropping the same 502 error every time. I already checked the hostname and it's correct, plus Nginx appears to be 100% working. These are the steps that I followed (the guide was made in 2020, but I also checked that things like the Cloudflare IPs were up to date):
sudo apt-get install ufw
sudo ufw status
sudo ufw disable
sudo ufw reset
sudo ufw allow ssh
sudo ufw allow ftp
The IPs below were up to date, but still check the Cloudflare pages linked in the guide; only allow HTTPS.
sudo ufw allow from 173.245.48.0/20 to any port https
sudo ufw allow from 103.21.244.0/22 to any port https
sudo ufw allow from 103.22.200.0/22 to any port https
sudo ufw allow from 103.31.4.0/22 to any port https
sudo ufw allow from 141.101.64.0/18 to any port https
sudo ufw allow from 108.162.192.0/18 to any port https
sudo ufw allow from 190.93.240.0/20 to any port https
sudo ufw allow from 188.114.96.0/20 to any port https
sudo ufw allow from 197.234.240.0/22 to any port https
sudo ufw allow from 198.41.128.0/17 to any port https
sudo ufw allow from 162.158.0.0/15 to any port https
sudo ufw allow from 104.16.0.0/12 to any port https
sudo ufw allow from 172.64.0.0/13 to any port https
sudo ufw allow from 131.0.72.0/22 to any port https
sudo ufw allow from 2400:cb00::/32 to any port https
sudo ufw allow from 2606:4700::/32 to any port https
sudo ufw allow from 2803:f800::/32 to any port https
sudo ufw allow from 2405:b500::/32 to any port https
sudo ufw allow from 2405:8100::/32 to any port https
sudo ufw allow from 2a06:98c0::/29 to any port https
sudo ufw allow from 2c0f:f248::/32 to any port https
sudo ufw enable
sudo ufw status
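A sketch of an alternative to the hard-coded list above: generate the rules from Cloudflare's published ranges (the two URLs are Cloudflare's official `ips-v4`/`ips-v6` lists), so the allow-list can't go stale. `cf_rules` is an illustrative helper that only prints the commands; review the output, then pipe it to `sudo sh`:

```shell
# cf_rules: read CIDR ranges on stdin, print one ufw rule per range.
cf_rules() {
    while read -r range; do
        [ -n "$range" ] && echo "ufw allow from $range to any port 443 proto tcp"
    done
    true  # don't let a trailing blank line set a failure exit status
}

# Preview the generated rules (requires network access):
#   curl -fsS https://www.cloudflare.com/ips-v4 https://www.cloudflare.com/ips-v6 | cf_rules
# Apply them once they look right:
#   curl -fsS https://www.cloudflare.com/ips-v4 https://www.cloudflare.com/ips-v6 | cf_rules | sudo sh
```

Separating rule generation from application keeps the firewall change reviewable before anything runs as root.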
OK, now let's install nginx:
sudo apt-get update
sudo apt-get install nginx
Some tutorials to get commands and details:
https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-18-04
http://archive.vn/sjBvU
Let's configure our server:
sudo nano /etc/nginx/sites-available/default (or with ftp)
Put the nginx config file below in its place; don't forget to change your hostname.
ctrl+x
y
server {
    if ($host = www.hostname.ltd) {
        return 301 https://$host$request_uri;
    }
    if ($host = hostname.ltd) {
        return 301 https://$host$request_uri;
    }
    listen 80;
    server_name hostname.ltd www.hostname.ltd;
    return 404;
}
server {
    client_max_body_size 100M;
    location /robots.txt {
        return 200 "User-agent: *
Disallow:";
    }
    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        real_ip_header CF-Connecting-IP;
        client_max_body_size 100M; # max file size for users to upload
    }
}
sudo systemctl enable nginx
sudo systemctl start nginx
Check for syntax errors:
sudo nginx -t
Now let's get an SSL certificate with Certbot and Cloudflare that auto-renews.
Get the latest instructions here:
https://certbot.eff.org/lets-encrypt/ubuntubionic-nginx
Go to "wildcard" and follow the steps:
sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository universe
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot python3-certbot-nginx
sudo apt-get install python3-certbot-dns-cloudflare
Get your Global API key here:
https://dash.cloudflare.com/profile/api-tokens
Copy the key somewhere, then:
sudo mkdir /root/.secrets/
sudo nano /root/.secrets/cloudflare.ini
Then paste this with your token (change the email and API key):
# Cloudflare API credentials used by Certbot
dns_cloudflare_email = myemail@email.com
dns_cloudflare_api_key = my-super-secret-api-key000000
Save
sudo chmod 0700 /root/.secrets/
sudo chmod 0400 /root/.secrets/cloudflare.ini
Run this to generate the certificate (don't forget to change the hostname):
sudo certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d hostname.ltd \
  -d www.hostname.ltd
(Note: you are limited to 5 certificates per week per domain by Let's Encrypt.)
Set the email address and agree to the terms.
It can take some time; be patient.
Check that automatic renewal works:
sudo certbot renew --dry-run
Now change the nginx config file to use SSL on port 443:
sudo nano /etc/nginx/sites-available/default (or by FTP)
server {
    if ($host = www.hostname.ltd) {
        return 301 https://$host$request_uri;
    }
    if ($host = hostname.ltd) {
        return 301 https://$host$request_uri;
    }
    listen 80;
    server_name hostname.ltd www.hostname.ltd;
    return 404;
}
server {
    listen 443 ssl;
    server_name hostname.ltd www.hostname.ltd;
    client_max_body_size 100M;
    location /robots.txt {
        return 200 "User-agent: *
Disallow:";
    }
    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        real_ip_header CF-Connecting-IP;
        client_max_body_size 100M; # max file size for users to upload
    }
    ssl_certificate /etc/letsencrypt/live/hostname.ltd/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/hostname.ltd/privkey.pem;
}
Check that everything works:
sudo systemctl restart nginx
sudo nginx -t
Now run:
sudo systemctl start mywebsite1
And you should see MyWebsite1 running on your hostname.
You can reboot to check that LynxChan starts properly on boot.
Is there something wrong with these test steps? And how can I fix this problem?
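Not part of the original steps, but a quick probe that usually narrows a 502 down: nginx returns 502 when the upstream behind `proxy_pass` (here `localhost:8080`) doesn't answer. This sketch assumes `curl` is installed; `check_upstream` is a name invented for illustration:

```shell
# check_upstream: probe host:port over HTTP. curl exits non-zero on a
# refused connection or timeout, which is what nginx surfaces as a 502.
check_upstream() {
    if curl -s -o /dev/null --max-time 3 "http://$1:$2/"; then
        echo "upstream $1:$2 answered"
    else
        echo "upstream $1:$2 is unreachable - nginx turns this into a 502"
    fi
}

check_upstream localhost 8080
```

If this reports the upstream as unreachable, the problem is the backend service (or its port), not the firewall or Cloudflare.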

Access error when setting up multiple domains in nginx

I am trying to set up multiple domains in nginx under Ubuntu 18.
I created a new directory with an index file:
mkdir /var/www/test.com
cd /var/www/test.com
sudo nano /var/www/test.com.index.html
with some sample text:
/var/www/test.com.index.html page sample
I created a config file:
sudo nano /etc/nginx/modules-available/test.com
with text:
server {
    listen 80;
    root /var/www/test.com;
    index index.html index.htm index.nginx-debian.html;
    server_name test.com;
    location / {
        try_files $uri $uri/ = 404;
    }
}
and in hosts:
sudo nano /etc/hosts
added one line:
127.0.0.1 localhost test.com
I created a symbolic link:
sudo ln -s /etc/nginx/modules-available/test.com /etc/nginx/sites-enabled/
and checking it, I see test.com:
cd /etc/nginx/sites-enabled/
ls
sudo service nginx restart
check:
curl test.com
I got :
$ curl test.com
<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.14.0 (Ubuntu)</center>
</body>
</html>
but not the sample text I entered above.
File /etc/nginx/nginx.conf is default and has lines :
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
1) Why the error, and how can I fix it?
2) I suppose I need to disable the /etc/nginx/sites-available/default file. How can I do that?
Edit:
I ran
cd /var/www/ # (the root of all apps)
sudo chown -R www-data: .
and restarted with
sudo systemctl restart php7.2-fpm
sudo service nginx restart
It did not help; I have the same error.
Could it be some option or misspelling in modules-available/test.com?
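As a hedged aside, not from the question itself: nginx answers 403 when it can enter the directory but finds no readable index file, which is worth ruling out here since the index was created at `/var/www/test.com.index.html` rather than inside the directory. `check_docroot` is an invented helper for illustration:

```shell
# check_docroot: report whether a docroot and its index.html are readable
# by the current user (nginx's worker needs equivalent read access).
check_docroot() {
    for f in "$1" "$1/index.html"; do
        if [ -r "$f" ]; then
            echo "OK: $f"
        else
            echo "PROBLEM: $f is missing or unreadable (403 candidate)"
        fi
    done
}

check_docroot /var/www/test.com
```

If the index file line reports a problem, the 403 is explained before any nginx option needs changing.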

Wordpress installation inside Vagrant box keeps redirecting back to http://localhost

I am having a strange issue at the moment where when I browse to a port-forwarded URI (in this case http://localhost:9001) of a Vagrant box in my browser I get redirected back to localhost (default server). I'm not sure why this is happening and it's really frustrating.
On my guest machine I have the root folder hosted in /var/www/wordpress and this is my /nginx/sites-available/nginx_vhost file:
server {
    listen 80;
    listen [::]:80 ipv6only=on;
    root /var/www/wordpress;
    index index.php index.html index.htm;
    server_name localhost;
    location / {
        # try_files $uri $uri/ =404;
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
    location ~* \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        #fastcgi_read_timeout 300;
        #proxy_read_timeout 300;
    }
}
Here is my Vagrant file:
Vagrant.configure("2") do |config|
  # Set Vagrant box to use
  config.vm.box = "ubuntu/trusty64"
  # Configure port forwarding
  config.vm.network :forwarded_port, guest: 80, host: 9001, auto_correct: true
  # Set synced folder
  config.vm.synced_folder "./", "/var/www/wordpress", create: true, group: "www-data", owner: "www-data"
  # Configure the VM
  config.vm.provider "virtualbox" do |v|
    v.name = "Pet Vets Vagrant Box"
    v.customize ["modifyvm", :id, "--memory", "1024"]
  end
  # Set up shell provisioning
  config.vm.provision :shell, :path => "bootstrap.sh"
end
And my bootstrap.sh file:
#!/bin/bash
echo "Provisioning virtual machine..."
apt-get update
echo "Installing Tree..."
apt-get install tree
echo "Installing Git"
apt-get install git -y > /dev/null
echo "Installing Nginx"
apt-get install nginx -y >/dev/null
echo "Configuring Nginx"
cp /var/www/wordpress/nginx_vhost /etc/nginx/sites-available/nginx_vhost > /dev/null
ln -s /etc/nginx/sites-available/nginx_vhost /etc/nginx/sites-enabled/nginx_vhost
rm /etc/nginx/sites-available/default
service nginx restart > /dev/null
echo "Updating PHP repository"
apt-get install python-software-properties build-essential -y > /dev/null
add-apt-repository ppa:ondrej/php5 -y > /dev/null
apt-get update > /dev/null
echo "Installing PHP"
apt-get install php5-common php5-dev php5-cli php5-fpm libssh2-php -y > /dev/null
echo "Installing PHP extensions"
apt-get install curl php5-curl php5-gd php5-mcrypt php5-mysql -y > /dev/null
apt-get install libapache2-mod-php5
The WordPress installation works fine when I host it on nginx locally (not using Vagrant), but as soon as I place it in a Vagrant box it doesn't want to play. Does anyone have any ideas?
Thanks!
So the problem wasn't with nginx; it was with the WordPress config.
I had siteurl and home in the wp_options table both set to http://localhost where they needed to be set to http://localhost:9001.
Hope this helps anyone in the future!
Thanks
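The wp_options fix above can be scripted. In this sketch the database name and credentials are placeholders, and `wp_url_sql` is an invented helper that only prints the SQL so it can be reviewed before being piped into `mysql`:

```shell
# wp_url_sql: print the UPDATE that points both siteurl and home at $1.
wp_url_sql() {
    cat <<SQL
UPDATE wp_options SET option_value = '$1' WHERE option_name IN ('siteurl', 'home');
SQL
}

# Preview the statement:
wp_url_sql 'http://localhost:9001'
# Apply it (placeholder credentials and database name):
#   wp_url_sql 'http://localhost:9001' | mysql -u root -p wordpress
```

Keeping both options in one statement avoids the half-updated state where siteurl and home disagree.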

Hosting an existing Wordpress installation on a Vagrant box

I have taken over the development of a few websites and am currently trying to get one hosted within a Vagrant box. I am very familiar with Vagrant but am having a strange issue that I have been unable to fix since last Friday.
I have created the Vagrantfile, and the MySQL database for the WordPress installation has been moved to my local (host) machine; I point to it from the WordPress installation on the guest machine. All the WordPress files exist and the folder is being shared with the guest machine.
My Vagrant file looks as follows:
Vagrant.configure("2") do |config|
  # Set Vagrant box to use
  config.vm.box = "ubuntu/trusty64"
  # Configure port forwarding
  config.vm.network :forwarded_port, guest: 80, host: 8930, auto_correct: true
  # Set synced folder
  config.vm.synced_folder "./", "/var/www", create: true, group: "www-data", owner: "www-data"
  # Configure the VM
  config.vm.provider "virtualbox" do |v|
    v.name = "St. David's Lab"
    v.customize ["modifyvm", :id, "--memory", "1024"]
  end
  # Set up shell provisioning
  config.vm.provision :shell, :path => "bootstrap.sh"
end
The bootstrap.sh file is used to set up the required software on the guest machine and looks as follows:
#!/bin/bash
echo "Provisioning virtual machine..."
apt-get update
echo "Installing Git"
apt-get install git -y > /dev/null
echo "Installing Nginx"
apt-get install nginx -y >/dev/null
echo "Configuring Nginx"
cp /var/www/nginx_vhost /etc/nginx/sites-available/nginx_vhost > /dev/null
ln -s /etc/nginx/sites-available/nginx_vhost /etc/nginx/sites-enabled/nginx_vhost
rm -rf /etc/nginx/sites-available/default
service nginx restart > /dev/null
echo "Updating PHP repository"
apt-get install python-software-properties build-essential -y > /dev/null
add-apt-repository ppa:ondrej/php5 -y > /dev/null
apt-get update > /dev/null
echo "Installing PHP"
apt-get install php5-common php5-dev php5-cli php5-fpm -y > /dev/null
echo "Installing PHP extensions"
apt-get install curl php5-curl php5-gd php5-mcrypt php5-mysql -y > /dev/null
apt-get install libapache2-mod-php5
And here is the server config that gets created on the guest machine:
server {
    listen 80;
    server_name localhost;
    root /var/www/;
    index index.php index.html;
    # Important for VirtualBox
    sendfile off;
    location / {
        try_files $uri $uri/ =404;
    }
    location ~* \.php {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_cache off;
        fastcgi_index index.php;
    }
}
I have changed the siteurl in the WordPress database to localhost:8930 as well.
The issue I am having is that when I try to access localhost:8930 (as defined in the port forwarding in my Vagrantfile), I get redirected back to the default localhost index page (http://localhost). It is not a cache issue: I have cleared the cache, used an incognito window, and replaced the index file with a simple "hello world", which shows.
Can anyone see why this may be happening?
Thanks
The $uri/ clause of the try_files directive is causing an external redirect to the default port. You probably want to avoid external redirects from nginx itself, because they overcomplicate your 8930 port-mapping rule.
One solution is to remove the index directive and the $uri/ clause.
You really need to add a default controller for WordPress anyway, something like:
location / {
    try_files $uri /index.php?$args;
}
EDIT:
Detailed analysis of the problem:
You request the URL http://localhost:8930, which is presented to nginx as http://localhost:80. The location / block processes the request. The try_files directive tests for the existence of a file and a directory. The presence of a directory causes an external redirect to http://localhost:80/. The external redirect is an undocumented side effect of the $uri/ clause.
The try_files directive is documented here.
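Pulling the answer's suggestion into the server block from the question, a sketch of the corrected vhost might look like this (untested here; the paths and PHP-FPM socket are the ones from the question):

```nginx
server {
    listen 80;
    server_name localhost;
    root /var/www;

    # Important for VirtualBox shared folders
    sendfile off;

    location / {
        # No index directive and no $uri/ clause, so no external redirect:
        # anything that is not a real file goes to WordPress's front controller.
        try_files $uri /index.php?$args;
    }

    location ~* \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
    }
}
```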

How to deploy web2py using nginx?

web2py is an awesome Python framework with great documentation, including several deployment recipes. Yet what I find missing there is a recipe for deploying with nginx (preferably with uWSGI). There are some incomplete notes around the web (like here), but I couldn't find any complete, stand-alone guide.
OK, looking closer into the web2py email list that I linked above, I figured out that the complete solution is already there. I could follow the instructions and, thanks to pbreit's brilliant post, my deployment now works like a charm (using only 38 MB RAM when idle) with nginx + uWSGI.
Here are the parts that I used (I just stripped down the fabfile.py to use it on the command line).
Note: where there is 'put('....' I used the nano text editor to create and edit files.
apt-get -y install build-essential psmisc python-dev libxml2 libxml2-dev python-setuptools
cd /opt;
wget http://projects.unbit.it/downloads/uwsgi-latest.tar.gz
tar -zxvf uwsgi*
mv /opt/uwsgi*/ /opt/uwsgi/
cd /opt/uwsgi/; python setup.py install
chown -R www-data:www-data /opt/uwsgi
touch /var/log/uwsgi.log
chown www-data /var/log/uwsgi.log
apt-get -y install libpcre3-dev build-essential libssl-dev
cd /opt; wget http://nginx.org/download/nginx-0.8.54.tar.gz
cd /opt; tar -zxvf nginx*
cd /opt/nginx*/; ./configure --prefix=/opt/nginx --user=nginx --group=nginx --with-http_ssl_module
cd /opt/nginx*/; make
cd /opt/nginx*/; make install
adduser --system --no-create-home --disabled-login --disabled-password --group nginx
cp /opt/uwsgi*/nginx/uwsgi_params /opt/nginx/conf/uwsgi_params
wget https://library.linode.com/web-servers/nginx/installation/reference/init-deb.sh
mv init-deb.sh /etc/init.d/nginx
chmod +x /etc/init.d/nginx
/usr/sbin/update-rc.d -f nginx defaults
/etc/init.d/nginx start
cd /opt/
wget https://library.linode.com/web-servers/nginx/python-uwsgi/reference/init-deb.sh
mv /opt/init-deb.sh /etc/init.d/uwsgi
chmod +x /etc/init.d/uwsgi
echo 'PYTHONPATH=/var/web2py/ MODULE=wsgihandler' >> /etc/default/uwsgi
/usr/sbin/update-rc.d -f uwsgi defaults
/etc/init.d/uwsgi start
rm /opt/nginx/conf/nginx.conf
# modify nginx.conf below and save it as /opt/nginx/conf/nginx.conf
cd /opt/nginx/conf; openssl genrsa -out server.key 1024
cd /opt/nginx/conf; openssl req -batch -new -key server.key -out server.csr
cd /opt/nginx/conf;
openssl x509 -req -days 1780 -in server.csr -signkey server.key -out server.crt
/etc/init.d/nginx restart
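As an aside, the three openssl steps above can be collapsed into a single self-signed certificate command. This sketch uses a 2048-bit key (1024-bit keys are rejected by modern clients) and a placeholder CN; write the files into /opt/nginx/conf as in the original steps:

```shell
# One-shot self-signed certificate: generate a new 2048-bit RSA key with no
# passphrase, self-sign it for ~5 years, and set the subject non-interactively.
# The CN is a placeholder; files land in the current directory.
openssl req -x509 -newkey rsa:2048 -nodes -days 1780 \
    -subj "/CN=example.com" \
    -keyout server.key -out server.crt
```

This skips the separate CSR, which is only needed when a third-party CA will sign the request.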
nginx.conf
user www-data;
worker_processes 4;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    keepalive_timeout 2;
    sendfile on;
    #tcp_nopush on;
    tcp_nodelay on;
    gzip on;

    server {
        listen 80;
        server_name example.com www.example.com;
        location / {
            uwsgi_pass 127.0.0.1:9001;
            include uwsgi_params;
        }
        location /static {
            root /var/web2py/applications/init/;
        }
    }

    # HTTPS server
    server {
        listen 443;
        server_name www.example.com example.com;
        ssl on;
        ssl_certificate /opt/nginx/conf/server.crt;
        ssl_certificate_key /opt/nginx/conf/server.key;
        location / {
            uwsgi_pass 127.0.0.1:9001;
            include uwsgi_params;
            uwsgi_param UWSGI_SCHEME $scheme;
        }
        location /static {
            root /var/web2py/applications/init/;
        }
    }
}
Derived from the web2py email list, with help from this Linode post.
There is a solution here: http://www.web2pyslices.com/slice/show/1495/updated-uwsgi-nginx-script-for-ubuntu-1110
