Nginx SNI + OCSP stapling not working

I recently tried to set up OCSP stapling on one of my nginx servers.
Unfortunately I couldn't get it to work and haven't found a solution so far.
The configuration looks like this:
ssl_certificate /etc/ssl/private/mysite.com/combined.pem;
ssl_certificate_key /etc/ssl/private/mysite.com/privkey.pem;
ssl_trusted_certificate /etc/ssl/private/mysite.com/fullchain.crt;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 valid=300s;
resolver_timeout 15s;
The fullchain.crt contains the server's cert, the intermediate, and the root cert.
If I check these certs by hand with:
openssl ocsp -issuer intermediate.crt -CAfile fullchain.crt -cert cert.crt -url http://tm.symcd.com -no_nonce
it returns ok:
Response verify OK
cert.crt: good
This Update: Apr 7 11:26:10 2018 GMT
Next Update: Apr 14 11:26:10 2018 GMT
But checking the server with s_client from elsewhere always returns
OCSP response: no response sent
even after waiting several minutes and nginx always throws the error:
2018/04/09 12:59:06 [error] 9474#9474: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get issuer certificate) while requesting certificate status, responder: tm.symcd.com
The server uses SNI since it delivers multiple sites with different certificates.
Does somebody have an idea what I am missing here?

In your openssl s_client command, you should try again adding the SNI option (-servername):
openssl s_client -connect <fqdn>:443 -servername <fqdn> -status -tlsextdebug -tls1_2 ...
I had the same issue and that worked for me.
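For example (a sketch; substitute your own FQDN for mysite.com), a working setup should show a stapled response:
openssl s_client -connect mysite.com:443 -servername mysite.com -status -tlsextdebug -tls1_2 < /dev/null 2>&1 | grep -A 2 'OCSP response'
OCSP response:
======================================
OCSP Response Data:
Also note that nginx fetches OCSP responses lazily, so the very first connection after a restart typically reports "no response sent"; try a second connection before concluding that stapling is broken.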

Related

nginx: OCSP stapling not performed on revoked cert

It seems nginx only performs OCSP stapling if the certificate is found to be not revoked.
Here is a TLS handshake with OCSP stapling enabled on nginx, with the server certificate revoked:
openssl s_client -connect helloworld.example.com:11443 -status -servername helloworld.example.com | grep OCSP
OCSP response: no response sent
As can be seen above, there is no OCSP response included.
Judging from the logs below, nginx does make a request to the OCSP server, understands that the certificate is revoked, and writes an [error] log entry:
2023/01/26 14:35:55 [error] 29#29: certificate status "revoked" in the OCSP response while requesting certificate status, responder: ocsp, peer: 172.18.0.2:8888, certificate: "/etc/ssl/helloworld.example.com.crt"
In the OCSP server logs the request from nginx is also visible:
ocsp_1 | OCSP Request Data:
(...)
ocsp_1 | OCSP Response Data:
ocsp_1 | OCSP Response Status: successful (0x0)
ocsp_1 | Response Type: Basic OCSP Response
(...)
ocsp_1 | Cert Status: revoked
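The revoked status can also be confirmed directly against the responder, independent of nginx (a sketch based on the logs above; the issuer file name ca.crt and the responder URL are assumptions):
openssl ocsp -issuer ca.crt -cert /etc/ssl/helloworld.example.com.crt -url http://ocsp:8888 -no_nonce -resp_text | grep 'Cert Status'
Cert Status: revoked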
Here's the same nginx configuration, but with the certificate not revoked. In that case everything works as expected and the client gets a stapled OCSP response:
depth=1 O = (...)
verify return:1
depth=0 CN = helloworld.example.com
verify return:1
OCSP response:
OCSP Response Data:
OCSP Response Status: successful (0x0)
Response Type: Basic OCSP Response
...
nginx itself doesn't log anything. The OCSP server log shows the request from nginx as well:
ocsp_1 | OCSP Request Data:
(...)
ocsp_1 | OCSP Response Data:
ocsp_1 | OCSP Response Status: successful (0x0)
ocsp_1 | Response Type: Basic OCSP Response
(...)
ocsp_1 | Cert Status: good
My conclusion is that nginx does not staple revoked certificates. This mailing list post seems to be in line with this observation:
https://mailman.nginx.org/pipermail/nginx/2014-April/043126.html
I would like OCSP stapling specifically for the case of revoked certificates. How can I get that to work, or is this a design choice of nginx that cannot be changed?

ffmpeg hls stream to nginx webdav. Remove old segments

I'm trying to stream an mp4 file in a loop to my nginx server, and I need to remove old segments:
ffmpeg -re -stream-loop -1 -i /data/samples/BigBuckBunny.mp4 -c copy -f hls -hls_time 5 -hls_flags delete_segments -hls_list_size 5 http://127.0.0.1:8080/upload/stream.m3u8
Everything is OK, but when ffmpeg tries to remove an old segment I get this error from nginx:
[error] 22#22: *73174 DELETE with body is unsupported, client:
127.0.0.1, server: _, request: "DELETE /upload/stream16.ts HTTP/1.1", host: "127.0.0.1:8080"
My nginx config:
location /upload {
    root /data/live;
    dav_access user:rw group:rw all:rw;
    dav_methods PUT DELETE MKCOL COPY MOVE;
    create_full_put_path on;
    charset utf-8;
    autoindex on;
}
ffmpeg 4.4.1
nginx 1.21.4
What am I doing wrong?
It seems that the ffmpeg http muxer that underlies the hls muxer defaults to chunked transfer encoding. When the DELETE request is made there is no body, but ffmpeg still makes the request with a single zero-length chunk.
The error message in nginx could be slightly more helpful. Indeed, it does not support webdav DELETE requests with a body, but it also does not support DELETE requests which are marked as chunked transfer encoded, regardless of whether there is a body (see: https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_dav_module.c#L315), hence the error.
It looks like it should be possible to disable this behaviour in ffmpeg using the chunked_post option, but it doesn't appear to be working. Not sure if this is a bug or not; it seems like a bit of a hack anyway.
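For reference, this is how the option would be passed on the output (per the ffmpeg http protocol options; as noted above, it did not seem to take effect in this test):
ffmpeg -re -stream-loop -1 -i /data/samples/BigBuckBunny.mp4 -c copy -f hls -hls_time 5 -hls_flags delete_segments -hls_list_size 5 -chunked_post 0 http://127.0.0.1:8080/upload/stream.m3u8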

How do you redirect a bare "example.com" domain to "https://example.com"?

Title really says it all. I cannot get my domain to redirect from http://example.com to https://example.com, yet https://example.com works, as does https://www.example.com.
My nginx conf is as follows, with sensitive paths removed. Here is a Gist link to all my nginx setup; it is currently the only enabled domain in my entire nginx configuration.
Console output on my local machine vs. my server:
rublev@rublevs-MacBook-Pro ~
• curl -I rublev.io
^C
rublev@rublevs-MacBook-Pro ~
• ssh r
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-75-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
78 packages can be updated.
0 updates are security updates.
Last login: Fri May 12 16:41:35 2017 from 198.84.225.249
rublev@ubuntu-512mb-tor1-01:~$ curl -I rublev.io
HTTP/1.1 200 OK
Server: nginx/1.10.0 (Ubuntu)
Date: Fri, 12 May 2017 16:41:43 GMT
Content-Type: text/html
Content-Length: 339
Last-Modified: Thu, 20 Apr 2017 20:47:12 GMT
Connection: keep-alive
ETag: "58f91e50-153"
Accept-Ranges: bytes
I am at my wits' end; I truly have no idea what to do now. I've spent weeks trying to get this working.
Let's Encrypt uses HTTP by default; to be safe, it's preferable to keep the .well-known path reachable for the ACME challenge. You also don't seem to be pointing to the right path for it, unless you changed your webroot to /home/rublev/sites/rublev.io?
I would try rewriting your default server like this instead of redirecting directly to your https equivalent. Besides, it will let you test this strange behavior more easily.
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name rublev.io www.rublev.io;
    location / {
        return 301 https://$server_name$request_uri;
    }
    location ~ /.well-known {
        # Apparently you changed your webroot; just make sure the
        # challenge file is created and accessible. Note that the URI is
        # appended to root, so root must not include .well-known itself.
        root /home/rublev/sites/rublev.io;
    }
}
Also, it's very important to know that you need either two different certificates or one that accepts your domain both with and without the www prefix. If that is not the case, it might very well be the reason for your issue. To generate a cert covering both names, you can run the following command:
sudo ./certbot-auto certonly --standalone -d rublev.io -d www.rublev.io --dry-run
I would also comment out your Jenkins server and configs, just in case they're messing with your main one. Note: from this question, you can use proxy_redirect http:// $scheme://; instead of the form you're currently using. It shouldn't affect another server, but I prefer to be sure in these weird scenarios.
Another thing, which might be the path to another solution, would be to pick only one form of your domain (either with or without www) and redirect users from the "wrong" one to the "right" one. This gives you more consistent URLs, which is better for SEO and for people sharing your links.
Add this server block:
server {
    server_name example.com;
    return 301 https://example.com$request_uri;
}
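Once that block is active, the redirect can be verified quickly (expected output sketched below, with your domain substituted):
curl -I http://example.com
HTTP/1.1 301 Moved Permanently
Server: nginx
Location: https://example.com/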

Will I be able to use CURL to get HTTP/2 headers?

Right now I use curl -I to retrieve headers.
With the upcoming adoption of HTTP/2 by browsers, will sites serve headers differently (HPACK) in a way that renders my use of the curl command ineffective?
Yes, you can use curl to see and send HTTP headers with HTTP/2 just as you do with HTTP/1.
curl supports HTTP/2, and it is implemented as a sort of translation layer: curl shows and "pretends" that headers work 1.1-style. It displays headers as text and delivers them in callbacks as if they were HTTP/1.1 headers. We made it this way to give scripts and applications a very smooth and basically invisible transition path to HTTP/2 with curl.
Internally that is of course done by decompressing received headers before showing them, and by showing sent headers before they are compressed.
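For example (a sketch; the exact status-line formatting varies across curl versions, and the server must support HTTP/2):
curl --http2 -I https://example.com
HTTP/2 200
content-type: text/html; charset=UTF-8
...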
I believe it depends on the curl version. HTTP/2 support was added in curl 7.36.x, IIRC? Not all distros would have that version.
This is with curl 7.41.0 over HTTP/2 against https://google.com
curl --http2 -I -v https://google.com
* Rebuilt URL to: https://google.com/
* Trying 173.194.123.1...
* Connected to google.com (173.194.123.1) port 443 (#0)
* ALPN, offering h2-14, http/1.1
* ALPN, server accepted to use h2-14
* Server certificate:
* subject: C=US; ST=California; L=Mountain View; O=Google Inc; CN=*.google.com
* start date: 2015-03-11 16:13:43 GMT
* expire date: 2015-06-09 00:00:00 GMT
* subjectAltName: google.com matched
* issuer: C=US; O=Google Inc; CN=Google Internet Authority G2
* SSL certificate verify ok.
* Using HTTP2
Edit: correction, curl --http2 needs to be built against nghttp2 for it to work: https://nghttp2.org/
curl --version
curl 7.41.0 (x86_64-unknown-linux-gnu) libcurl/7.41.0 OpenSSL/1.0.2b zlib/1.2.8 nghttp2/0.7.8-DEV
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP HTTP2 UnixSockets

NGINX + uWSGI Connection Reset by Peer

I'm trying to host Bottle Application on NGINX using uWSGI.
Here's my nginx.conf
location /myapp/ {
    include uwsgi_params;
    uwsgi_param X-Real-IP $remote_addr;
    uwsgi_param Host $http_host;
    uwsgi_param UWSGI_SCRIPT myapp;
    uwsgi_pass 127.0.0.1:8080;
}
I'm running uwsgi like this:
uwsgi --enable-threads --socket :8080 --plugin python --wsgi-file ./myApp/myapp.py
I'm sending a POST request, using Dev HTTP Client, to http://localhost/myapp, and it hangs forever.
uWSGI server receives the request and prints
[pid: 4683|app: 0|req: 1/1] 127.0.0.1 () {50 vars in 806 bytes} [Thu Oct 25 12:29:36 2012] POST /myapp => generated 737 bytes in 11 msecs (HTTP/1.1 404) 2 headers in 87 bytes (1 switches on core 0)
but in nginx error log
2012/10/25 12:20:16 [error] 4364#0: *11 readv() failed (104: Connection reset by peer) while reading upstream, client: 127.0.0.1, server: localhost, request: "POST /myApp/myapp/ HTTP/1.1", upstream: "uwsgi://127.0.0.1:8080", host: "localhost"
What should I do?
Make sure to consume your POST data in your application.
For example, if you have a Django/Python application:
from django.http import HttpResponse

def my_view(request):
    # Make sure to read the POST data, even if you don't need it;
    # without this you get: failed (104: Connection reset by peer)
    data = request.body  # request.DATA with Django REST framework
    return HttpResponse("Hello World")
Some details: https://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html
You cannot post data from the client without reading it in your application. While this is not a problem for uWSGI itself, nginx will fail. You can 'fake' it using the --post-buffering option of uWSGI to automatically read data from the socket (if available), but you'd better 'fix' your app (even if I don't consider this a bug).
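For reference, that workaround applied to the command line from the question would look like this (a sketch; 4096 is an arbitrary buffer size in bytes):
uwsgi --enable-threads --socket :8080 --plugin python --wsgi-file ./myApp/myapp.py --post-buffering 4096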
This problem occurs when the body of a request is not consumed, since uwsgi cannot know whether it will still be needed at some point. So uwsgi will keep holding on to the data either until it is consumed or until nginx resets the connection (because upstream timed out).
The author of uwsgi explains it here:
08:21 < unbit> plaes: does your DELETE request (not-response) have a body ?
08:40 < unbit> and do you read that body in your app ?
08:41 < unbit> from the nginx logs it looks like it has a body and you are not reading it in the app
08:43 < plaes> so DELETE request shouldn't have the body?
08:43 < unbit> no i mean if a request has a body you have to read/consume it
08:44 < unbit> otherwise the socket will be clobbered
So to fix this you need to make sure to always either read the whole request body, or not send a body when it is not necessary (e.g., for a DELETE).
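Since the question uses Bottle, here is a minimal sketch of consuming the body there (the /myapp route and module layout are assumptions matching the question):
from bottle import Bottle, request

app = Bottle()

@app.post('/myapp')
def myapp():
    # Read (consume) the POST body even when it is not needed;
    # otherwise nginx may reset the upstream connection.
    _ = request.body.read()
    return "Hello World"

# uwsgi --wsgi-file looks for a callable named "application"
application = app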
Don't use threads!
I had the same problem with the Global Interpreter Lock in Python under uWSGI.
When I don't use threads, there are no connection resets.
Example uWSGI config (1 GB RAM on the server):
[root@mail uwsgi]# cat myproj_config.yaml
uwsgi:
    print: Myproject Configuration Started
    socket: /var/tmp/myproject_uwsgi.sock
    pythonpath: /sites/myproject/myproj
    env: DJANGO_SETTINGS_MODULE=settings
    module: wsgi
    chdir: /sites/myproject/myproj
    daemonize: /sites/myproject/log/uwsgi.log
    max-requests: 4000
    buffer-size: 32768
    harakiri: 30
    harakiri-verbose: true
    reload-mercy: 8
    vacuum: true
    master: 1
    post-buffering: 8192
    processes: 4
    no-orphans: 1
    touch-reload: /sites/myproject/log/uwsgi
