Nginx - GitLab: could not clone over HTTPS

I started setting up a gitlab-runner; before that I had only cloned, pulled, and pushed over SSH. Over SSH there was no problem, so I think the issue is with nginx. I have tried some nginx settings, but I'm not sure what is needed. Does anybody know what to set to get this working? The website itself runs fine.
The nginx output while cloning the git repo over HTTPS:
172.17.0.1 - - [20/Jul/2017:21:13:39 +0000] "GET /server/nginx.git/info/refs?service=git-upload-pack HTTP/1.1" 401 26 "-" "git/2.7.4"
172.17.0.1 - user [20/Jul/2017:21:13:39 +0000] "GET /server/nginx.git/info/refs?service=git-upload-pack HTTP/1.1" 401 26 "-" "git/2.7.4"
172.17.0.1 - - [20/Jul/2017:21:13:42 +0000] "POST /heartbeat HTTP/1.1" 200 5 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:54.0) Gecko/20100101 Firefox/54.0"
172.17.0.1 - - [20/Jul/2017:21:13:46 +0000] "GET /ocs/v2.php/apps/notifications/api/v2/notifications HTTP/1.1" 200 74 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:54.0) Gecko/20100101 Firefox/54.0"
172.17.0.1 - user [20/Jul/2017:21:13:47 +0000] "GET /server/nginx.git/info/refs?service=git-upload-pack HTTP/1.1" 200 415 "-" "git/2.7.4"
172.17.0.1 - user [20/Jul/2017:21:13:47 +0000] "POST /server/nginx.git/git-upload-pack HTTP/1.1" 500 0 "-" "git/2.7.4"
The git client's response:
error: RPC failed; HTTP 500 curl 22 The requested URL returned error: 500 Internal Server Error
fatal: The remote end hung up unexpectedly
The gitlab-workhorse error message:
2017-07-22_11:19:45.43536 2017/07/22 11:19:45 error: POST "/server/nginx.git/git-upload-pack": handleUploadPack: ReadAllTempfile: open /tmp/gitlab-workhorse-read-all-tempfile358528589: permission denied
2017-07-22_11:19:45.43551 git.dropanote.de 172.10.11.97:43758 - - [2017-07-22 11:19:45.349933226 +0000 UTC] "POST /server/nginx.git/git-upload-pack HTTP/1.1" 500 0 "" "git/2.7.4" 0.085399
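That "permission denied" from ReadAllTempfile points at the directory gitlab-workhorse uses for temporary files. A quick check (a sketch; the container name "gitlab" is an assumption) is whether /tmp inside the container is world-writable with the sticky bit:
docker exec gitlab ls -ld /tmp
# a usable tmp directory looks like: drwxrwxrwt ... /tmp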
The nginx config:
## GitLab
##
## Modified from nginx http version
## Modified from http://blog.phusion.nl/2012/04/21/tutorial-setting-up-gitlab-on-debian-6/
## Modified from https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
##
## Lines starting with two hashes (##) are comments with information.
## Lines starting with one hash (#) are configuration parameters that can be uncommented.
##
##################################
## CONTRIBUTING ##
##################################
##
## If you change this file in a Merge Request, please also create
## a Merge Request on https://gitlab.com/gitlab-org/omnibus-gitlab/merge_requests
##
###################################
## configuration ##
###################################
##
## See installation.md#using-https for additional HTTPS configuration details.
upstream gitlab-workhorse {
  server 172.10.11.66:8181;
  keepalive 32;
}

## Redirects all HTTP traffic to the HTTPS host
server {
  ## Either remove "default_server" from the listen line below,
  ## or delete the /etc/nginx/sites-enabled/default file. This will cause gitlab
  ## to be served if you visit any address that your server responds to, eg.
  ## the ip address of the server (http://x.x.x.x/)
  listen 0.0.0.0:80;
  listen [::]:80 ipv6only=on;
  server_name url.tdl; ## Replace this with something like gitlab.example.com
  server_tokens off; ## Don't show the nginx version number, a security best practice

  location /.well-known/acme-challenge {
    root /tmp;
  }

  location / {
    return 301 https://$http_host$request_uri;
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;
  }
}

## HTTPS host
server {
  listen 0.0.0.0:443 ssl;
  listen [::]:443 ipv6only=on ssl;
  server_name url.tdl; ## Replace this with something like gitlab.example.com
  server_tokens off; ## Don't show the nginx version number, a security best practice
  root /opt/gitlab/embedded/service/gitlab-rails/public;

  ## Strong SSL Security
  ## https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html & https://cipherli.st/
  ssl on;
  ssl_certificate linkto/fullchain.pem;
  ssl_certificate_key linkto/privkey.pem;
  add_header Strict-Transport-Security "max-age=15768000; includeSubDomains" always;

  # GitLab needs backwards compatible ciphers to retain compatibility with Java IDEs
  ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;
  ssl_session_cache shared:SSL:10m;
  ssl_session_timeout 5m;

  location / {
    client_max_body_size 0;
    gzip off;

    ## https://github.com/gitlabhq/gitlabhq/issues/694
    ## Some requests take more than 30 seconds.
    proxy_read_timeout 3000;
    proxy_connect_timeout 3000;
    proxy_redirect off;

    proxy_http_version 1.1;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Ssl on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://gitlab-workhorse;
  }
}

Make sure you're running gitlab-workhorse; check that your /etc/gitlab/gitlab.rb has these lines uncommented:
gitlab_workhorse['enable'] = true
gitlab_workhorse['listen_network'] = "tcp"
gitlab_workhorse['listen_addr'] = "127.0.0.1:8181"
Then run:
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart
...
ok: run: gitlab-workhorse: ...
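To double-check that workhorse actually came up and is listening where nginx expects it, something like this should confirm it (standard runit and iproute2 commands):
sudo gitlab-ctl status gitlab-workhorse
sudo ss -tlnp | grep 8181    # the listen_addr port from gitlab.rb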
Your nginx.conf looks fine to me.

I solved the problem by removing the tmp folder from the mapped Docker volumes. Previously I had mapped the tmp folder to my host system. I do not know why, but GitLab had a problem writing to this folder, and that turned out to be the cause of the HTTP connection failure.
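For illustration only, assuming the container was started with a bind mount like the hypothetical one below, the fix amounts to dropping the /tmp line and recreating the container:
# hypothetical original invocation: /tmp bind-mounted to the host
docker run -d --name gitlab \
  -v /srv/gitlab/config:/etc/gitlab \
  -v /srv/gitlab/tmp:/tmp \
  gitlab/gitlab-ce:latest
# fix: remove the "-v /srv/gitlab/tmp:/tmp" mapping, then recreate the container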

With Git 2.35 (Q1 2022), git upload-pack (the other side of git fetch) should be more robust.
It used an 8 kB buffer, but most of its payload came in 64 kB "packets"; the buffer size has been enlarged so that such a packet fits.
This is a contribution directly from GitLab staff (Jacob Vosmaer).
See commit 55a9651 (14 Dec 2021) by Jacob Vosmaer (jacobvosmaer).
(Merged by Junio C Hamano -- gitster -- in commit d9fc3a9, 05 Jan 2022)
upload-pack.c: increase output buffer size
Signed-off-by: Jacob Vosmaer
When serving a fetch, git upload-pack copies data from a git pack-objects stdout pipe to its own stdout.
This commit increases the size of the buffer used for that copying from 8192 to 65515, the maximum sideband-64k packet size.
Previously, this buffer was allocated on the stack.
Because the new buffer size is nearly 64KB, we switch this to a heap allocation.
On GitLab.com we use GitLab's pack-objects cache which does writes of 65515 bytes.
Because of the default 8KB buffer size, propagating these cache writes requires 8 pipe reads and 8 pipe writes from git-upload-pack, and 8 pipe reads from Gitaly (our Git RPC service).
If we increase the size of the buffer to the maximum Git packet size, we need only 1 pipe read and 1 pipe write in git-upload-pack, and 1 pipe read in Gitaly to transfer the same amount of data.
In benchmarks with a pure fetch and 100% cache hit rate workload we are seeing CPU utilization reductions of over 30%.

Related

Plumber POST API under an Nginx reverse proxy returns a 404 error

There seems to be quite a bit floating around the internet about problems such as this, but nothing I have found and tried quite pertains to my particular issue.
I had a functional Plumber API working behind Nginx on DigitalOcean; alas, when I installed PHP 8.0.2 and upgraded Ubuntu to 22.04 (and overwrote my conf files, then reconfigured them!), it ceased to work. I can see that my R pid is listening on port 3000, and myApi-plumber.service is also directed at port 3000, yet when I test http://127.0.0.1:3000 as the root user in the console it returns a 404 error instead of the expected 405 (I have provided all the info below).
I have not altered any settings in my plumber.R file since it was functional, and it still works perfectly on my local machine, which leads me to suspect a server configuration issue.
I assume I've done something incorrectly since then, but I've spent days on this and cannot work out what it might be. I have adhered to consistent trailing slashes, restarted nginx, and rebooted the droplet. I have since spun up new droplets and reinstalled everything from scratch, but nothing works. My firewall is also configured as it should be.
Here are my settings:
Nginx version 1.18.0
/etc/nginx/sites-available/default
server {
  listen 80 default_server;
  listen [::]:80 default_server;
  root /var/www/html;
  index index.php index.html;
  server_name _;

  location / {
    try_files $uri $uri/ =404;
  }

  location /myApi/ {
    proxy_pass http://127.0.0.1:3000/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
  }
}
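One detail worth knowing when reading this block: because both the location prefix and the proxy_pass URL end in a slash, nginx replaces the matched prefix before forwarding. A sketch of the mapping (the request paths are examples):
# GET /myApi/plot  with  location /myApi/ { proxy_pass http://127.0.0.1:3000/; }
#   -> upstream receives: GET /plot
# with proxy_pass http://127.0.0.1:3000; (no URI part)
#   -> upstream receives: GET /myApi/plot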
curl -i http://127.0.0.1:3000
(Here I would expect a '405 - Method Not Allowed' for a POST API with no JSON content passed to it.)
HTTP/1.1 404 Not Found
Date: Fri, 27 Jan 2023 13:11:52 GMT
Access-Control-Allow-Origin: *
Content-Type: application/json
Content-Length: 36
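Since this is a POST API, it can help to probe the Plumber process with an explicit POST rather than curl's default GET; the endpoint path / below is an assumption:
curl -i -X POST http://127.0.0.1:3000/ \
  -H "Content-Type: application/json" \
  -d '{}'
# a route mounted at a different path would still return 404 here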
Webpage form POST request
(Here I would expect a JSON object to get passed to the plumber API and run through an R script which compiles a file from server-side files and becomes available as a link on the page.)
function foo(myCallback) {
  $.ajax({
    type: "POST",
    url: "http://XX.XX.XX.XX/myAPI/",
    data: myAPIString,
    dataType: "json",
    success: myCallback,
  });
}
This returns a 404 XHR error in the network tab of the browser console.
sudo lsof -i -P -n | grep LISTEN
R 717 root 15u IPv4 XXXXX 0t0 TCP 127.0.0.1:3000
plumber-API.service - Plumber API
Loaded: loaded (/etc/systemd/system/plumber-API.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2023-01-27 10:40:03 UTC; 2h 45min ago
Main PID: 717 (R)
Tasks: 4 (limit: 1131)
Memory: 210.8M
CGroup: /system.slice/plumber-myTree.service
└─717 /usr/lib/R/bin/exec/R --no-echo --no-restore -e pr <- plumber::pr('/var/plumber/myApi/plumber.R'); pr$setDocs(FALSE); pr$run(port=3000)
Jan 27 10:40:03 ShakenPolygamy systemd[1]: Started Plumber API.
Jan 27 10:40:10 ShakenPolygamy Rscript[717]: Loading required package: maps
Jan 27 10:40:13 ShakenPolygamy Rscript[717]: Running plumber API at http://127.0.0.1:3000
Very frustrating indeed. Any help is so much appreciated!

NGINX Thumbnail Generation won't work with spaces or %20 in Ubuntu 22 but did in Ubuntu 16 - Nginx 1.23.0 specific

Please see the update at the bottom regarding nginx 1.23.0 being the cause.
I've been using NGINX to generate image thumbnails for a number of years; however, having just switched from Ubuntu 16 to Ubuntu 22 and nginx 1.16 to nginx 1.23, it no longer works with paths that include spaces.
It is very likely that this is a configuration difference rather than something to do with the different versions; however, as far as I can tell the NGINX configs are identical, so possibly it is something to do with the different Ubuntu/nginx versions.
The error given when accessing a URL with a space in the path is simply "400 Bad Request".
There are no references to the request in either the access or error logs after 400 Bad Request is returned.
The nginx site config looks like this and was originally based on this guide:
server {
  server_name localhost;
  listen 8888;

  access_log /home/mysite/logs/nginx-thumbnails-localhost-access.log;
  error_log /home/mysite/logs/nginx-thumbnails-localhost-error.log error;

  location ~ "^/width/(?<width>\d+)/(?<image>.+)$" {
    alias /home/mysite/$image;
    image_filter resize $width -;
    image_filter_jpeg_quality 95;
    image_filter_buffer 8M;
  }

  location ~ "^/height/(?<height>\d+)/(?<image>.+)$" {
    alias /home/mysite/$image;
    image_filter resize - $height;
    image_filter_jpeg_quality 95;
    image_filter_buffer 8M;
  }

  location ~ "^/resize/(?<width>\d+)/(?<height>\d+)/(?<image>.*)$" {
    alias /home/mysite/$image;
    image_filter resize $width $height;
    image_filter_jpeg_quality 95;
    image_filter_buffer 8M;
  }

  location ~ "^/crop/(?<width>\d+)/(?<height>\d+)/(?<image>.*)$" {
    alias /home/mysite/$image;
    image_filter crop $width $height;
    image_filter_jpeg_quality 95;
    image_filter_buffer 8M;
  }
}
proxy_cache_path /tmp/nginx-thumbnails-cache/ levels=1:2 keys_zone=thumbnails:10m inactive=24h max_size=1000m;
server {
  listen 80;
  listen [::]:80;
  listen 443 ssl http2;
  listen [::]:443 ssl http2;
  server_name thumbnails.mysite.net;

  ssl_certificate /etc/letsencrypt/live/thumbnails.mysite.net/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/thumbnails.mysite.net/privkey.pem;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
  ssl_prefer_server_ciphers on;
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

  access_log /home/mysite/logs/nginx-thumbnails-access.log;
  error_log /home/mysite/logs/nginx-thumbnails-error.log error;

  location ~ "^/width/(?<width>\d+)/(?<image>.+)$" {
    # Proxy to internal image resizing server.
    proxy_pass http://localhost:8888/width/$width/$image;
    proxy_cache thumbnails;
    proxy_cache_valid 200 24h;
  }

  location ~ "^/height/(?<height>\d+)/(?<image>.+)$" {
    # Proxy to internal image resizing server.
    proxy_pass http://localhost:8888/height/$height/$image;
    proxy_cache thumbnails;
    proxy_cache_valid 200 24h;
  }

  location ~ "^/resize/(?<width>\d+)/(?<height>\d+)/(?<image>.+)$" {
    # Proxy to internal image resizing server.
    proxy_pass http://localhost:8888/resize/$width/$height/$image;
    proxy_cache thumbnails;
    proxy_cache_valid 200 24h;
  }

  location ~ "^/crop/(?<width>\d+)/(?<height>\d+)/(?<image>.+)$" {
    # Proxy to internal image resizing server.
    proxy_pass http://localhost:8888/crop/$width/$height/$image;
    proxy_cache thumbnails;
    proxy_cache_valid 200 24h;
  }

  location /media {
    # Nginx needs you to manually define DNS resolution when using
    # variables in proxy_pass. Creating this dummy location avoids that.
    # The error is: "no resolver defined to resolve localhost".
    proxy_pass http://localhost:8888/;
  }
}
I don't know if it's related but nginx-thumbnails-error.log also regularly has the following line, although testing the response in browser seems to work:
2022/07/19 12:02:28 [error] 1058111#1058111: *397008 connect() failed (111: Connection refused) while connecting to upstream, client: ????, server: thumbnails.mysite.net, request: "GET /resize/100/100/path/to/my/file.png HTTP/2.0", upstream: "http://[::1]:8888/resize/100/100/path/to/my/file.png", host: "thumbnails.mysite.net"
This error does not appear when accessing a file with a space in it.
There are no references to the request for a file with a space in the path in nginx-thumbnails-access.log or nginx-thumbnails-error.log.
But there is an entry in the access log for localhost, nginx-thumbnails-localhost-access.log:
./nginx-thumbnails-localhost-access.log:127.0.0.1 - - [29/Jul/2022:10:37:13 +0000] "GET /resize/200/200/test dir/KPjjCTl0lnpJcUdQIWaPflzAEzgN25gRMfAH5qiI.png HTTP/1.0" 400 150 "-" "-"
When a path has no spaces there is an entry in both nginx-thumbnails-localhost-access.log and nginx-thumbnails-access.log
./nginx-thumbnails-localhost-access.log:127.0.0.1 - - [29/Jul/2022:10:43:48 +0000] "GET /resize/200/202/testdir/KPjjCTl0lnpJcUdQIWaPflzAEzgN25gRMfAH5qiI.png HTTP/1.0" 200 11654 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"
./nginx-thumbnails-access.log:185.236.155.232 - - [29/Jul/2022:10:43:48 +0000] "GET /resize/200/202/testdir/KPjjCTl0lnpJcUdQIWaPflzAEzgN25gRMfAH5qiI.png HTTP/2.0" 200 11654 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"
I've no idea if it's relevant, but access log entries for images with a space in the name do not include the browser user agent.
./nginx-thumbnails-localhost-access.log:127.0.0.1 - - [29/Jul/2022:10:52:16 +0000] "GET /resize/200/202/testdir/thumb image.png HTTP/1.0" 400 150 "-" "-"
./nginx-thumbnails-localhost-access.log:127.0.0.1 - - [29/Jul/2022:10:52:33 +0000] "GET /resize/200/202/testdir/thumbimage.png HTTP/1.0" 200 11654 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"
As requested, here is the result from curl -i:
HTTP/2 400
server: nginx
date: Thu, 28 Jul 2022 15:44:13 GMT
content-type: text/html
content-length: 150
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
I can now confirm that this is specific to nginx 1.23.0, which is not yet a stable release.
I created a new DigitalOcean droplet, installed nginx, and set up the thumbnail server, and it worked perfectly. The default nginx installation for Ubuntu 22 was 1.18.0.
I then upgraded to 1.23.0 as I had done on my live server via:
apt-add-repository ppa:ondrej/nginx-mainline -y
apt install nginx
The thumbnail server then stopped working with the same issue as my original issue.
I am now investigating downgrading nginx.
Downgrading Nginx to 1.18.0 worked, using these steps:
apt-add-repository --remove ppa:ondrej/nginx-mainline
apt autoremove nginx
apt update
apt install nginx
For some reason on one server I also had to run apt autoremove nginx-core but not on the other.
However, I'm a little concerned that 1.18.0 is marked as no longer receiving security support, but I could not find an easy way to install and test 1.22.0, only 1.18.0: https://endoflife.date/nginx
The first thing you could try is the rewrite regex replacement [flag]; directive, which in the code below looks for a space or %20, removes it, rewrites the file name without it, and issues a redirect to the new URL.
rewrite ^(.*)(\s|%20)(.*)$ $1$3 permanent;
Another possible solution is that you may have to handle the spaces in the file names by stripping them from the captured variable. To try that in your config:
location ~ "^/crop/(?<width>\d+)/(?<height>\d+)/(?<image>.+)$" {
if ( (?<image>.+)$ ~ (.*)%20(.*) ) {
set (?<image>.+)$ $1$3;
} if ( (?<image>.+)$ ~ (.*)\s(.*) ) {
set (?<image>.+)$ $1$3;
}
(?<image>.+);
}
But tbh I haven't tried the second one myself.
The error above seems to be unrelated but someone has posted a fix to a similar error.
I figured out that this error was specific to Nginx version 1.23.0 and downgrading to 1.18.0 resolved the issue.
Downgrading Nginx to 1.18.0 worked, using these steps:
apt-add-repository --remove ppa:ondrej/nginx-mainline
apt autoremove nginx
apt update
apt install nginx
For some reason on one server I also had to run apt autoremove nginx-core but not on the other.
I am still investigating whether 1.22.0 is okay and how to report this error to Nginx directly.
So thanks to this answer on the Nginx Bug Tracker: https://trac.nginx.org/nginx/ticket/1930
The solution was actually very simple: remove the URI component from the proxy_pass directives.
From this:
location ~ "^/resize/(?<width>\d+)/(?<height>\d+)/(?<image>.+)$" {
# Proxy to internal image resizing server.
proxy_pass localhost:8888/resize/$width/$height/$image;
proxy_cache thumbnails;
proxy_cache_valid 200 24h;
}
To this:
location ~ "^/resize/(?<width>\d+)/(?<height>\d+)/(?<image>.+)$" {
# Proxy to internal image resizing server.
proxy_pass localhost:8888;
proxy_cache thumbnails;
proxy_cache_valid 200 24h;
}
And then it works.
This is the full explanation from Maxim Dounin on the bug tracker:
That's because spaces are not allowed in request URIs, and since nginx 1.21.1 these are rejected. Quoting CHANGES:
*) Change: now nginx always returns an error if spaces or control characters are used in the request line.
See ticket #196 for details.
Your configuration results in incorrect URIs being used during proxying, as it uses named captures from $uri, which is unescaped, in proxy_pass, which expects variables, when used, to be properly escaped.
As far as I can see, in your configuration the simplest fix would be to change all proxy_pass directives so that they don't use any URI component, that is:
location ~ "/width/(?<width>\d+)/(?<image>.+)$" {
proxy_pass http://localhost:8888;
proxy_cache thumbnails;
proxy_cache_valid 200 24h;
}
This way, nginx will pass the URIs from the client request unmodified (and properly escaped), and these match the URIs you've been trying to reconstruct with variables.
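To verify the fix, requesting a path with an encoded space (hostname and file taken from the logs earlier in this post) should now return 200 instead of 400:
curl -I "https://thumbnails.mysite.net/resize/200/202/testdir/thumb%20image.png"
# expect: HTTP/2 200 (previously: 400 Bad Request)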

nginx SSL (SSL: error:14201044:SSL routines:tls_choose_sigalg:internal error)

I've searched through a bunch of questions to find the correct configuration for nginx SSL, but my EC2 website isn't online. When it was HTTP-only (port 80) it was working fine.
Steps I took
1 - Set the security group for the EC2 instance, opening traffic from all IPv4 addresses to ports 443 and 80 (ok)
2 - Set up /etc/nginx/sites-available and /etc/nginx/sites-enabled for HTTP-only access; that was working fine (ok)
3 - Then started the SSL process, creating the crypto keys: sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/nginx-selfsigned.key -out /etc/nginx/nginx-selfsigned.crt (ok)
4 - Then I modified the 'default' file in both /etc/nginx/sites-available and /etc/nginx/sites-enabled to apply SSL to my website (???)
server {
  listen 443 ssl;
  listen [::]:443 ssl;
  server_name ec2-23-22-52-143.compute-1.amazonaws.com www.ec2-23-22-52-143.compute-1.amazonaws.com;

  # Importing ssl
  ssl_certificate /etc/nginx/nginx-selfsigned.crt;
  ssl_certificate_key /etc/nginx/nginx-selfsigned.key;

  # front-end
  location / {
    root /var/www/html;
    try_files $uri /index.html;
  }

  # node api
  location /api/ {
    proxy_pass http://localhost:3000/;
  }
}

server {
  listen 80;
  listen [::]:80;
  server_name ec2-23-22-52-143.compute-1.amazonaws.com www.ec2-23-22-52-143.compute-1.amazonaws.com;
  return 301 https://$server_name$request_uri;
}
5 - Tested the configuration with sudo nginx -t and it is an OK configuration (ok)
6 - Restarted nginx: sudo systemctl restart nginx (ok)
7 - Tested whether the necessary ports are being listened on: sudo netstat -plant | grep 80 and sudo netstat -plant | grep 443; both are listening (ok)
8 - It should all work, everything looks great, so I tried to visit the website, and to my surprise it's offline with the error "ERR_CONNECTION_CLOSED"
https://ec2-23-22-52-143.compute-1.amazonaws.com/
9 - The only thing left to check was the nginx error logs at /var/log/nginx/, and there is this error related to SSL:
2022/04/07 19:24:25 [crit] 2453#2453: *77 SSL_do_handshake() failed (SSL: error:14201044:SSL routines:tls_choose_sigalg:internal error) while SSL handshaking, client: 45.56.107.29, server: 0.0.0.0:443
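As a debugging sketch (not part of the original post), the handshake can also be driven from the command line with openssl s_client, which usually prints a more specific alert than the browser's ERR_CONNECTION_CLOSED:
openssl s_client -connect ec2-23-22-52-143.compute-1.amazonaws.com:443 \
  -servername ec2-23-22-52-143.compute-1.amazonaws.com
# a missing or unreadable certificate/key typically aborts the handshake
# before any certificate is printed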
Conclusion
I don't know why SSL_do_handshake() failed or what I can do to fix this issue; does anyone have a guess how to solve this problem? Thanks a lot to the Stack Overflow community, you are great!

Debugging 404 error with nginx+uwsgi+flask on Ubuntu

I'm looking for help for what I assume is a configuration error with nginx...
My uwsgi .ini file is located at /var/www/mysite.com/public_html/api/calc:
[uwsgi]
#application's base folder
base = /var/www/mysite.com/public_html/api/calc
#python module to import
app = hello
module = %(app)
home = %(base)/venv
pythonpath = %(base)
#socket file's location
socket = /var/www/mysite.com/public_html/api/calc/%n.sock
#permissions for the socket file
chmod-socket = 666
#the variable that holds a flask application inside the module imported at line #6
callable = app
#location of log files
logto = /var/log/uwsgi/%n.log
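Since socket = %n.sock and nginx (below) expects calc_uwsgi.sock, the ini file is presumably named calc_uwsgi.ini (%n expands to the config file name without its extension), so uwsgi would be launched with something like:
uwsgi --ini /var/www/mysite.com/public_html/api/calc/calc_uwsgi.ini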
My nginx configuration is using letsencrypt/certbot to serve two domains. The mysite.com domain is also set up to serve an api subdomain. The relevant nginx server block is:
server {
  server_name api.mysite.com;
  root /var/www/mysite.com/public_html/api;
  index index.html index.htm;

  location / {
    try_files $uri $uri/ =404;
  }

  location /calc/ {
    include uwsgi_params;
    uwsgi_pass unix:/var/www/mysite.com/public_html/api/calc/calc_uwsgi.sock;
  }

  listen [::]:443 ssl; # managed by Certbot
  listen 443 ssl; # managed by Certbot
  ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem; # managed by Certbot
  ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem; # managed by Certbot
  include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
  ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
I'm trying to create an API located at api.mysite.com/calc/ by adding a location block that specifies the uwsgi gateway defined in /var/www/mysite.com/public_html/api/calc.
With uwsgi running, when I access https://api.mysite.com/calc, the uwsgi log file shows that my python is being invoked and is generating response data:
[pid: 34683|app: 0|req: 24/24] 192.168.1.1 () {56 vars in 1127 bytes} [Sun Feb 21 18:26:01 2021] GET /calc/ => generated 232 bytes in 2 msecs (HTTP/1.1 404) 2 headers in 87 bytes (1 switches on core 0)
The nginx access.log shows the access:
192.168.1.1 - - [21/Feb/2021:18:50:33 -0500] "GET /calc/ HTTP/1.1" 404 209 "-" "Mozilla/5.0 (X11; CrOS x86_64 13505.73.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.109 Safari/537.36"
But the client gets back a 404 not found error. There is no entry in the nginx error.log file.
(Obviously) I don't know what I'm doing. I've tried a bunch of different things in the nginx configuration, but no love.
Any suggestions appreciated!
Update
I had assumed that with this configuration, a request to api.mysite.com/calc would be served by the "/" route in my Python, but it appears that it is actually served by the "/calc/" route:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World - '/' route "

@app.route("/calc/")
def hellocalc():
    return "Hello world - '/calc/' route "
Is there a way to configure this so that my python doesn't need to include 'calc' in all of its routes?
I got this working using uwsgi's internal routing route-uri directive:
[uwsgi]
#try rewrite routing
route-uri = ^/calc/?(.*)$ rewrite:/
This necessitated reinstalling uwsgi after installing the PCRE development packages:
sudo apt-get install libpcre3 libpcre3-dev
pip3 install -I --no-cache-dir uwsgi
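An alternative worth mentioning (not what was used above): uwsgi can mount the app under the prefix and strip it via SCRIPT_NAME, which also keeps Flask routes prefix-free. A sketch using the module/callable names from the ini above:
[uwsgi]
; mount the Flask app under /calc; manage-script-name makes uwsgi rewrite
; SCRIPT_NAME/PATH_INFO so the app sees "/" for a request to "/calc/"
mount = /calc=hello:app
manage-script-name = true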
But here's another question: I did not install uwsgi with sudo, so now it lives in
/home/me/.local/bin/uwsgi
Is this good or bad?

ERR_EMPTY_RESPONSE with Ghost 0.7.5, Nginx 1.9, HTTPS and HTTP/2

Problem
When I hit kevinsuttle.com, I get
"No data received ERR_EMPTY_RESPONSE".
When I hit https://kevinsuttle.com, I get the site.
ghost 0.7.5
nginx 1.4 => 1.9.9
letsencrypt 0.2.0
Digital Ocean: Ubuntu 14.04 Ghost 1-click droplet
Under Networking > Domains, I have both kevinsuttle.com and www.kevinsuttle.com as A records pointing to the server's IP address (#).
DNSimple records
| Type | Name                | TTL           | Content                |
|------|---------------------|---------------|------------------------|
| URL  | www.kevinsuttle.com | 3600 (1 hour) | http://kevinsuttle.com |
The only modified portion in Ghost's config.js is my domain.
url: 'http://kevinsuttle.com',
Nginx 1.9
nginx 1.9 doesn't create the following directories by default:
/etc/nginx/sites-available
/etc/nginx/sites-enabled
and the usual default conf isn't created in either of those directories.
Instead, there is an /etc/nginx/conf.d/default.conf and, the important one, /etc/nginx/nginx.conf. You'll see a lot of tutorials telling you to delete default.conf, which seems to be fine, but whatever you do, do NOT delete nginx.conf.
Also, you should move/create your ghost.conf in the /etc/nginx/conf.d/ directory. That's what fixed one of my problems, because the last line in /etc/nginx/nginx.conf looks in the conf.d/ directory and includes any files there: include /etc/nginx/conf.d/*.conf;
Here's my /etc/nginx/conf.d/ghost.conf file:
server {
  root /usr/share/nginx/html;
  index index.html index.htm;

  listen 443 ssl http2;
  server_name kevinsuttle.com www.kevinsuttle.com;

  ssl_certificate /etc/letsencrypt/live/kevinsuttle.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/kevinsuttle.com/privkey.pem;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;
  ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

  location ~ /.well-known {
    allow all;
    root /var/www/;
  }

  location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header HOST $http_host;
    proxy_set_header X-NginX-Proxy true;
    proxy_pass http://127.0.0.1:2368;
    proxy_redirect off;
    root /var/www/;
  }

  location /.well-known/ {
    root /var/www/;
  }
}

server {
  listen 80 ssl http2;
  server_name kevinsuttle.com;
  return 301 https://$host$request_uri;
}
Now, I had this all working fine, and then tried to upgrade nginx to 1.9+ in order to serve via HTTP/2. DigitalOcean's 1-click Ghost droplet defaults to nginx 1.4.
Long story short, I kept getting this error:
dpkg: error processing archive /var/cache/apt/archives/nginx_1.9.9-1~trusty_amd64.deb (--unpack):
and the only solution I found was
apt-get purge nginx nginx-common
I was then able to install nginx 1.9 by adding the following lines to my /etc/apt/sources.list file:
deb http://nginx.org/packages/mainline/ubuntu/ trusty nginx
deb-src http://nginx.org/packages/mainline/ubuntu/ trusty nginx
Now I simply added listen 80 ssl http2; and listen 443 ssl http2;, and http/2 works fine. But only when the https:// URL is explicitly entered.
I found some evidence that points to the fact that express doesn't support http/2, but I'm not 100% on that.
Any ideas would be much appreciated.
I don't think this is a problem with Ghost, nor with the DNS.
You can exclude the DNS because both www and not-www are resolving to the configured IP.
➜ ~ dig kevinsuttle.com +short
162.243.4.120
➜ ~ dig www.kevinsuttle.com +short
162.243.4.120
The DNS protocol works at a lower level than HTTP and doesn't care about the HTTP version or whether you use HTTP vs HTTPS. Therefore we can exclude the DNS and move on to examine the higher-level protocols.
I can also exclude Ghost/Express as the issue, because I can send an HTTP/2 request to your blog when I use HTTPS.
➜ ~ curl --http2 -I https://kevinsuttle.com/
HTTP/2.0 200
server:nginx/1.9.9
date:Sun, 24 Jan 2016 19:34:30 GMT
content-type:text/html; charset=utf-8
content-length:13594
x-powered-by:Express
cache-control:public, max-age=0
etag:W/"351a-fflrj9kHHJyvRRSahEc8JQ"
vary:Accept-Encoding
I can also fall back to HTTP 1.1, as long as I use the HTTPS version of the site.
➜ ~ curl -I https://kevinsuttle.com/
HTTP/1.1 200 OK
Server: nginx/1.9.9
Date: Sun, 24 Jan 2016 19:35:36 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 13594
Connection: keep-alive
X-Powered-By: Express
Cache-Control: public, max-age=0
ETag: W/"351a-fflrj9kHHJyvRRSahEc8JQ"
Vary: Accept-Encoding
Hence, the problem is the Nginx configuration. Specifically, the problem is the Nginx configuration for the HTTP-only block.
I can't try it right now, but personally I believe the problem is this line:
listen 80 ssl http2;
it should be
listen 80;
The ssl parameter tells the listening socket to expect SSL. However, in your case it doesn't make sense for the socket listening on 80 to use HTTPS. Moreover, a socket that uses ssl must have an associated SSL configuration declared (at a minimum, a valid certificate and key).
Generally, you use ssl to configure a single server that handles both HTTP and HTTPS requests:
server {
  listen 80;
  listen 443 ssl;
  server_name www.example.com;
  ssl_certificate www.example.com.crt;
  ssl_certificate_key www.example.com.key;
  ...
}
Also note that, as explained in the Nginx documentation:
The use of the ssl directive in modern versions is thus discouraged.
The use of http2 in combination with a non-https socket can also be the cause of trouble.
Quoting this article:
While the spec doesn’t force anyone to implement HTTP/2 over TLS but allows you to do it over clear text TCP, representatives from both the Firefox and the Chrome development teams have expressed their intents to only implement HTTP/2 over TLS. This means HTTPS:// URLs are the only ones that will enable HTTP/2 for these browsers.
Therefore, assuming it would be possible, serving the non-https site via HTTP/2 may not be useful. Actually, I doubt it's even possible as of today, given the issue described in this ticket which seems to match your issue.
To summarize, simply change
server {
  listen 80 ssl http2;
  server_name kevinsuttle.com;
  return 301 https://$host$request_uri;
}
to
server {
  listen 80;
  server_name kevinsuttle.com;
  return 301 https://$host$request_uri;
}
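After that change, a plain-HTTP request should come back as a clean 301 to HTTPS, which can be verified with:
curl -I http://kevinsuttle.com/
# expect: HTTP/1.1 301 Moved Permanently
# Location: https://kevinsuttle.com/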
