Debugging 404 error with nginx+uwsgi+flask on Ubuntu - nginx

I'm looking for help for what I assume is a configuration error with nginx...
My uwsgi .ini file is located at /var/www/mysite.com/public_html/api/calc:
[uwsgi]
#application's base folder
base = /var/www/mysite.com/public_html/api/calc
#python module to import
app = hello
module = %(app)
home = %(base)/venv
pythonpath = %(base)
#socket file's location
socket = /var/www/mysite.com/public_html/api/calc/%n.sock
#permissions for the socket file
chmod-socket = 666
#the variable that holds a flask application inside the module imported at line #6
callable = app
#location of log files
logto = /var/log/uwsgi/%n.log
My nginx configuration is using letsencrypt/certbot to serve two domains. The mysite.com domain is also set up to serve an api subdomain. The relevant nginx server block is:
server {
server_name api.mysite.com;
root /var/www/mysite.com/public_html/api;
index index.html index.htm;
location / {
try_files $uri $uri/ =404;
}
location /calc/ {
include uwsgi_params;
uwsgi_pass unix:/var/www/mysite.com/public_html/api/calc/calc_uwsgi.sock;
}
listen [::]:443 ssl; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
I'm trying to create an API located at api.mysite.com/calc/ by adding a location block that specifies the uwsgi gateway defined in /var/www/mysite.com/public_html/api/calc.
With uwsgi running, when I access https://api.mysite.com/calc, the uwsgi log file shows that my python is being invoked and is generating response data:
[pid: 34683|app: 0|req: 24/24] 192.168.1.1 () {56 vars in 1127 bytes} [Sun Feb 21 18:26:01 2021] GET /calc/ => generated 232 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 87 bytes (1 switches on core 0)
The nginx access.log shows the access:
192.168.1.1 - - [21/Feb/2021:18:50:33 -0500] "GET /calc/ HTTP/1.1" 404 209 "-" "Mozilla/5.0 (X11; CrOS x86_64 13505.73.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.109 Safari/537.36"
But the client gets back a 404 not found error. There is no entry in the nginx error.log file.
(Obviously) I don't know what I'm doing. I've tried a bunch of different things in the nginx configuration, but no love.
Any suggestions appreciated!
Update
I had assumed that with this configuration, a request to api.mysite.com/calc would be served by the "/" route in my python, but it appears that it is actually served by the "/calc/" route:
@app.route("/")
def hello():
    return "Hello World - '/' route "

@app.route("/calc/")
def hellocalc():
    return "Hello world - '/calc/' route "
Is there a way to configure this so that my python doesn't need to include 'calc' in all of its routes?

I got this working using the internal routing route-uri directive
[uwsgi]
#try rewrite routing
route-uri = ^/calc/?(.*)$ rewrite:/
This necessitated reinstalling uwsgi after installing the PCRE development libraries:
sudo apt-get install libpcre3 libpcre3-dev
pip3 install -I --no-cache-dir uwsgi
But here's another question: I did not install uwsgi with sudo, so now it lives in
/home/me/.local/bin/uwsgi
Is this good or bad?
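An alternative worth noting (a sketch, not taken from the thread): uwsgi can mount the app under /calc itself and strip the prefix before Flask sees it, using mount plus manage-script-name, which avoids the rewrite entirely. The module and callable names below are the ones from the question's ini:

```ini
[uwsgi]
base = /var/www/mysite.com/public_html/api/calc
home = %(base)/venv
pythonpath = %(base)
socket = %(base)/%n.sock
chmod-socket = 666
logto = /var/log/uwsgi/%n.log

; mount the Flask app at /calc and let uwsgi rewrite SCRIPT_NAME/PATH_INFO,
; so the Python routes can stay "/" with no /calc prefix
mount = /calc=hello:app
manage-script-name = true
```

With this, the nginx location /calc/ block can stay exactly as it is.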

Related

nginx SSL (SSL: error:14201044:SSL routines:tls_choose_sigalg:internal error)

I've searched through a bunch of questions to set the correct configuration for nginx SSL, but my EC2 website isn't online. When it was HTTP-only (port 80), it was working fine.
Steps I made
1 - Set the EC2 security group to open ports 443 and 80 to all IPv4 traffic (ok)
2 - Set up /etc/nginx/sites-available and /etc/nginx/sites-enabled for HTTP-only access, which was working fine (ok)
3 - Started the SSL process, creating the keys: sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/nginx-selfsigned.key -out /etc/nginx/nginx-selfsigned.crt (ok)
4 - Modified the 'default' file in both /etc/nginx/sites-available and /etc/nginx/sites-enabled to apply SSL to my website (???)
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name ec2-23-22-52-143.compute-1.amazonaws.com www.ec2-23-22-52-143.compute-1.amazonaws.com;
#Importing ssl
ssl_certificate /etc/nginx/nginx-selfsigned.crt;
ssl_certificate_key /etc/nginx/nginx-selfsigned.key;
# front-end
location / {
root /var/www/html;
try_files $uri /index.html;
}
# node api
location /api/ {
proxy_pass http://localhost:3000/;
}
}
server {
listen 80;
listen [::]:80;
server_name ec2-23-22-52-143.compute-1.amazonaws.com www.ec2-23-22-52-143.compute-1.amazonaws.com;
return 301 https://$server_name$request_uri;
}
5 - Tested the configuration with sudo nginx -t and it's an OK configuration (ok)
6 - Restarted nginx with sudo systemctl restart nginx (ok)
7 - Tested whether the necessary ports are listening with sudo netstat -plant | grep 80 and sudo netstat -plant | grep 443, and both are listening (ok)
8 - It should work, everything looks great, so I tried to visit the website and to my surprise it's offline with the error "ERR_CONNECTION_CLOSED"
https://ec2-23-22-52-143.compute-1.amazonaws.com/
9 - The only thing left to check was the nginx error log at /var/log/nginx/, and there is this ERROR related to SSL:
2022/04/07 19:24:25 [crit] 2453#2453: *77 SSL_do_handshake() failed (SSL: error:14201044:SSL routines:tls_choose_sigalg:internal error) while SSL handshaking, client: 45.56.107.29, server: 0.0.0.0:443
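This particular handshake error is often a certificate/key mismatch (for example, a leftover key from an earlier openssl run). A quick consistency check, demonstrated here on a throwaway pair — on the server, point the same two openssl commands at /etc/nginx/nginx-selfsigned.crt and /etc/nginx/nginx-selfsigned.key:

```shell
# Create a scratch self-signed pair just to demonstrate the check.
dir=$(mktemp -d)
openssl req -x509 -nodes -days 1 -newkey rsa:2048 -subj "/CN=example" \
  -keyout "$dir/selfsigned.key" -out "$dir/selfsigned.crt" 2>/dev/null

# A certificate and its private key must share the same RSA modulus.
# If these two digests differ, nginx can fail the TLS handshake with
# errors like tls_choose_sigalg:internal error.
cert_mod=$(openssl x509 -noout -modulus -in "$dir/selfsigned.crt" | openssl md5)
key_mod=$(openssl rsa  -noout -modulus -in "$dir/selfsigned.key" | openssl md5)
echo "cert: $cert_mod"
echo "key:  $key_mod"
```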
Conclusion
I don't know why SSL_do_handshake() failed or what I can do to fix this issue. Does anyone have a guess? Thanks a lot, Stack Overflow community, you are great!

Gitlab ssl in subdomain validation failed

I want to install GitLab on my nginx server.
I followed these instructions for the install.
gitlab-ctl reconfigure gives me:
There was an error running gitlab-ctl reconfigure:
letsencrypt_certificate[gitlab.domain.dev] (letsencrypt::http_authorization line 5) had an error: RuntimeError: acme_certificate[staging] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/letsencrypt/resources/certificate.rb line 25) had an error: RuntimeError: ruby_block[create certificate for gitlab.domain.dev] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/acme/resources/certificate.rb line 108) had an error: RuntimeError: [gitlab.domain.dev] Validation failed, unable to request certificate
I use :
Debian 8
Nginx
My firewall allows 443 & 80 (I have one site on HTTP and one on HTTPS)
I have access to sudo (or root)
apt install ca-certificates curl openssh-server postfix
I tried:
Create the subdomain gitlab.domain.dev in my DNS
Create an SSL cert for this domain with certbot
At this step the subdomain is OK
Install gitlab with EXTERNAL_URL="https://gitlab.domain.dev" apt-get install gitlab-ee
At this step gitlab.domain.dev returns nothing
I tried editing the config file (nano /etc/gitlab/gitlab.rb) like this:
nginx['ssl_certificate'] = "/etc/letsencrypt/live/gitlab.domain.dev/fullchain.pem"
nginx['ssl_certificate_key'] = "/etc/letsencrypt/live/gitlab.domain.dev/privkey.pem"
and ran gitlab-ctl reconfigure
and caught the error again
I tried this too
I don't understand how to tell GitLab to use the SSL certificates I already created, or how to make my subdomain serve GitLab.
My nginx subdomain conf file :
# the nginx server instance
server {
server_name gitlab.domain.dev;
root /var/www/gitlab.domain.dev;
index index.html index.htm index.nginx-debian.html;
access_log /var/log/nginx/gitlab.domain.dev.log;
location / {
try_files $uri $uri/ =404;
}
listen [::]:443 ssl; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/gitlab.domain.dev/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/gitlab.domain.dev/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = gitlab.domain.dev) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
listen [::]:80;
server_name gitlab.domain.dev;
return 404; # managed by Certbot
}
Update 1
I tried:
Converting the .pem files to .key and .crt with:
openssl x509 -outform der -in your-cert.pem -out your-cert.crt
openssl pkey -in privkey.pem -out foo.key
Changing the values in the gitlab config file (nano /etc/gitlab/gitlab.rb) to:
nginx['redirect_http_to_https'] = true
nginx['ssl_certificate'] = "/etc/gitlab/ssl/gitlab.domain.dev.crt"
nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/gitlab.domain.dev.key"
reconfigure :
There was an error running gitlab-ctl reconfigure:
letsencrypt_certificate[gitlab.domain.dev] (letsencrypt::http_authorization line 5) had an error: RuntimeError: acme_certificate[staging] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/letsencrypt/resources/certificate.rb line 25) had an error: RuntimeError: ruby_block[create certificate for gitlab.domain.dev] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/acme/resources/certificate.rb line 108) had an error: RuntimeError: [gitlab.domain.dev] Validation failed, unable to request certificate
The letsencrypt['enable'] setting used to default to false, but later versions of GitLab changed the default to true. That change can break your server setup, leading to SSL errors such as the one in question.
In my case, I had self-signed certificates stored in a specific folder, and the default of true interfered with fetching those self-signed certs. The problem was solved once I changed the setting to false.
Info from official GitLab Docs:
"Caution: GitLab 12.1 or later will attempt to renew any Let’s Encrypt certificate. If you plan to use your own Let’s Encrypt certificate you must set letsencrypt['enable'] = false in /etc/gitlab/gitlab.rb to disable integration. Otherwise the certificate could be overwritten due to the renewal."

How to change host for vue-cli hot reload endpoint (sockjs)?

What do I have:
vue-cli app running in virtual machine (vue --version 3.7.0)
Laravel Homestead v8.3.2
Vagrant 2.2.4
VirtualBox
Nginx
vue.config.js:
module.exports = {
devServer: {
host: 'myvueapp.local',
https: true
}
}
Nginx config:
server {
listen 80;
listen 443 ssl http2;
server_name .myvueapp.local;
root "/home/path/to/myvueapp.local/public";
index index.html index.htm;
charset utf-8;
location / {
try_files $uri $uri/ /index.html =404;
proxy_pass https://myvueapp.local:8080;
}
sendfile off;
}
npm run serve output:
Local: https://myvueapp.local:8080/
Network: https://myvueapp.local:8080/
What do I do:
I run npm run serve in my VM. I can access the Vue app from my host machine at myvueapp.local in the browser.
What's my problem:
Hot reload does not work. The sockjs connection is calling myvueapp.local:8080, not myvueapp.local. So I'm getting
https://myvueapp.local:8080/sockjs-node/info?t=
net::ERR_CONNECTION_REFUSED
You need a public property on your devServer, like this
Then, hot reloading will work in fallback (http polling) mode, but to properly get websockets working, you need to handle upgrade requests in your proxy server. Here is a script that solves the problem for express. You will need to port this to nginx. It's just the last part concerning upgrade requests that you're missing.
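For the nginx side, the missing upgrade handling might look like this (a sketch based on the question's config; the original answer's express script is not reproduced here, and the map block must sit at http level):

```nginx
# Map the client's Upgrade header so plain keep-alive requests still work.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl http2;
    server_name .myvueapp.local;

    location / {
        proxy_pass https://myvueapp.local:8080;
        # Forward WebSocket upgrade requests (used by sockjs hot reload).
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
}
```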

nginx in front of wordpress configuration causes "Request exceeded the limit of 10 internal redirects due to probable configuration error."

I have a site https://example.com running on instance 1 on AWS EC2:
nginx + wildfly app server
Everything works fine. Nginx proxy passes to wildfly. Https is configured.
I would like to set up wordpress running on different EC2 instance so it is accessible via https://example.com/blog
I set up dedicated instance for wordpress and launched wordpress using docker compose as described here: https://docs.docker.com/compose/wordpress/
I configured Compose so wordpress is accessible via port 80 on that instance.
I configured nginx as follows:
server {
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
server_name <server_name>;
root /usr/share/nginx/html;
ssl_certificate "/etc/ssl/certs/<path_to_cert>/ssl-bundle.crt";
ssl_certificate_key "/etc/ssl/certs/<path_to_cert>/private.key";
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!RSAPSK:!aDH:!aECDH:!EDH-DSS-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!SRP;
ssl_prefer_server_ciphers on;
include /etc/nginx/default.d/*.conf;
# main site, deployed on same instance as nginx
location / {
proxy_pass http://127.0.0.1:8080;
}
# wordpress, deployed on other instance
location /blog/ {
proxy_pass http://<my_wordpress_instance_local_ip>/;
proxy_set_header X-Forwarded-Proto $scheme;
}
error_page 404 /404.html;
location = /40x.html {
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
I installed wordpress (using direct access to it) and configured WordPress Address (URL) and Site Address (URL) to be "https://example.com/blog" both.
Now I can access admin and home page https://example.com/blog fine. But when I click to see test "Hello World" post (url: https://example.com/blog/2018/09/28/hello-world/) I get 500 Internal Server error.
I looked at the wordpress docker log and found the following error:
"[Fri Sep 28 18:17:25.721177 2018] [core:error] [pid 308] [client :47792] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: https://example.com/blog/
"
How can I fix that?
Disclaimer: I don't have much experience with nginx or wordpress, so sorry if this is a stupid question. I tried different options I found on the internet, but all of them either don't work or describe a manual wordpress setup (not via the standard docker image).
UPDATES:
I updated the description of the nginx conf: the whole configuration is now provided;
My Wordpress Permalink setting is "Day and name". I tried all of them and found that if I switch to "Plain", everything works with no error. All other settings cause the error. Obviously "Plain" does not work for me, and I would like to proceed with something better than that.
I found that the following answer fixes the issue:
https://stackoverflow.com/a/4521620/2033394
It requires modifying the image though, so it's not ideal, but it makes everything work fine.
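For context, AH00124 loops with pretty permalinks usually come down to the rewrite rules in WordPress's .htaccess not matching what the container actually sees. Since proxy_pass with a trailing slash strips the /blog/ prefix, the container serves paths from /, so a .htaccess generated for /blog/ rewrites to a path that never resolves and loops. A sketch of the stock WordPress block adjusted accordingly (an assumption about the cause; the linked answer may differ in details):

```apache
# .htaccess inside the WordPress container. The /blog/ prefix is already
# stripped by nginx's proxy_pass, so the rewrite base must be "/".
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
# Only fall through to index.php for requests that are not real files
# or directories, which is what breaks the internal-redirect loop.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
```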

xmlrpc attack on nginx configured with HTTPS redirection

I am trying to avoid nasty xmlrpc attack with the following configuration:
server {
listen 443 ssl default deferred;
server_name myserver.com;
...
}
server {
listen 80;
server_name myserver.com;
location /xmlrpc.php {
deny all;
access_log off;
log_not_found off;
return 444;
}
return 301 https://$host$request_uri;
}
Apparently the location block is not working, since requests to /xmlrpc.php get redirected, as shown by the logs:
[02/Jun/2016:11:24:10 +0000] "POST /xmlrpc.php HTTP/1.0" 301 185 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)"
How can I discard all requests to /xmlrpc.php right away without having them redirected to HTTPS?
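One likely explanation (based on nginx's request-processing order, not stated in the thread): a return at server level runs during the server rewrite phase, before location matching, so the 301 fires for every request, including /xmlrpc.php. Moving the redirect into a location block lets the deny rule match first; a sketch:

```nginx
server {
    listen 80;
    server_name myserver.com;

    # Exact match wins over the prefix location below, so these
    # requests are dropped and never redirected.
    location = /xmlrpc.php {
        access_log off;
        log_not_found off;
        return 444;
    }

    # Redirect everything else from inside a location, not at server
    # level, so it no longer shadows the block above.
    location / {
        return 301 https://$host$request_uri;
    }
}
```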
An XML-RPC attack is a common attack in which an attacker constantly calls your xmlrpc.php file with random authentication credentials.
You need to use a utility like Fail2Ban, which is quite effective at banning attackers and protecting your WordPress site against common xmlrpc attacks.
First of all, disable XML-RPC if you are not posting content from outside. Add the following line of code to your theme's functions.php file:
add_filter( 'xmlrpc_enabled', '__return_false' );
Add the following in your .conf file
location = /xmlrpc.php {
deny all;
access_log off;
log_not_found off;
}
Then, you need to install Fail2Ban on your server
apt-get install fail2ban iptables
or
yum install fail2ban
Post installation, you need to edit jail.conf file
vim /etc/fail2ban/jail.conf
Inside the jail.conf file add the following lines of code
[xmlrpc]
enabled = true
filter = xmlrpc
action = iptables[name=xmlrpc, port=http, protocol=tcp]
logpath = /var/log/apache2/access.log
bantime = 43600
maxretry = 2
This reads the access.log file (provide the actual path of your access log) and looks for failed attempts. If it detects more than 2 failed attempts, the attacker's IP address is added to your iptables rules.
Now, we have to create a filter for fail2ban. Type this in terminal
cd /etc/fail2ban/filter.d/
vim xmlrpc.conf
Inside this filter file paste the following definition
[Definition]
failregex = ^<HOST> .*POST .*xmlrpc\.php.*
ignoreregex =
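You can sanity-check the filter before restarting. fail2ban ships fail2ban-regex for testing against the real log; the grep below is just a local approximation of the same pattern (the literal <HOST> token is fail2ban syntax that expands to a host-matching regex):

```shell
# On the server:
#   fail2ban-regex /var/log/apache2/access.log /etc/fail2ban/filter.d/xmlrpc.conf
# Local approximation against a sample attack line:
line='203.0.113.5 - - [02/Jun/2016:11:24:10 +0000] "POST /xmlrpc.php HTTP/1.0" 444 0'
if echo "$line" | grep -Eq '^[0-9.]+ .*POST .*xmlrpc\.php'; then
  echo "matched"
fi
```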
Now, just restart the fail2ban service
service fail2ban restart or /etc/init.d/fail2ban restart
Watch the log like this:
tail -f /var/log/fail2ban.log
Also, in your iptables rules you will constantly see lots of new entries, and those blocked clients should be getting connection-refused errors. Use
watch iptables -L
to monitor continuously. It should immediately block the xmlrpc attack and you should see lots of entries in your iptables rules.
If there are plugins which depend on XML-RPC, you can allow your own IP in the config file.
You can try it this way:
location ^~ /xmlrpc.php {
deny all;
access_log off;
log_not_found off;
return 444;
}
