I'm trying to connect to the OneDrive API (https://api.onedrive.com) using curl, but it fails with
curl: (60) SSL certificate problem: unable to get local issuer certificate
The page opens in Firefox without a certificate warning. I checked the certificate info, put all three certificates (Baltimore CyberTrust Root, Microsoft IT SSL SHA2 and storage.live.com) into /usr/share/ca-certificates and installed them using
dpkg-reconfigure ca-certificates
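In full, what I ran was roughly this (the filenames are placeholders for the three certificates I exported from Firefox in PEM format with a .crt extension):
sudo cp BaltimoreCyberTrustRoot.crt MicrosoftITSSLSHA2.crt storage-live-com.crt /usr/share/ca-certificates/
sudo dpkg-reconfigure ca-certificates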
However, I keep getting this error. Any ideas?
me@host:~$ curl -vi https://api.onedrive.com
* Rebuilt URL to: https://api.onedrive.com/
* Hostname was NOT found in DNS cache
* Trying 134.170.109.152...
* Connected to api.onedrive.com (134.170.109.152) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs/
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS alert, Server hello (2):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
me@host:~$ curl -V
curl 7.38.0 (x86_64-pc-linux-gnu) libcurl/7.38.0 OpenSSL/1.0.1k zlib/1.2.8 libidn/1.29 libssh2/1.4.3 librtmp/2.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS IDN IPv6 Largefile GSS-API SPNEGO NTLM NTLM_WB SSL libz TLS-SRP
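For reference, the chain the server actually presents can be dumped with openssl (the -servername flag makes sure SNI is sent), in case that helps pin down which issuer curl is missing:
echo | openssl s_client -connect api.onedrive.com:443 -servername api.onedrive.com -showcerts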
Related
I want to disable TLS 1.0 and TLS 1.1 from my website.
The website is hosted on Google Kubernetes Engine on Google Cloud Platform.
I used this Nginx ingress: https://cloud.google.com/community/tutorials/nginx-ingress-gke
For the SSL certificate, I used cert-manager, following this tutorial: https://youtu.be/hoLUigg4V18
I don't understand where I should make the change. Should it be done in:
the ingress YAML file
cert-manager
the load balancer on GCP
I tried to create an SSL policy on GCP, but I wasn't able to add a target because that requires a GCE ingress, not Nginx (I have to use Nginx due to the lack of required metadata in GCE).
I also tried creating a ConfigMap, but TLS 1.0 and 1.1 are still enabled.
It seems that nginx-ingress defaults to allowing only TLS 1.2 and 1.3; please check the Nginx Ingress documentation.
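If you do need to pin the protocols explicitly, the controller reads an ssl-protocols key from its ConfigMap. A minimal sketch (the ConfigMap name and namespace below are placeholders; use whatever your nginx-ingress install created):
kubectl -n <controller-namespace> patch configmap <controller-configmap> \
  --type merge -p '{"data":{"ssl-protocols":"TLSv1.2 TLSv1.3"}}'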
You can verify with openssl as follows (replace <hostname> with your site's hostname):
To verify that TLSv1.0 is disabled, run the following command:
echo | openssl s_client -servername <hostname> -connect <hostname>:443 -tls1 2>&1 | grep -c 'ssl handshake failure'
To verify that TLSv1.1 is disabled, run the following command:
echo | openssl s_client -servername <hostname> -connect <hostname>:443 -tls1_1 2>&1 | grep -c 'ssl handshake failure'
A result greater than 0 means the handshake failed, i.e. TLSv1.0 or TLSv1.1 is disabled.
To verify that TLSv1.2 is still enabled, run the following command:
echo | openssl s_client -servername <hostname> -connect <hostname>:443 -tls1_2 2>&1 | grep -c 'ssl handshake failure'
A result of 0 means the handshake succeeded, i.e. TLSv1.2 is enabled.
Determine which TLS versions and ciphers are enabled via Nmap
You can determine which TLS versions and ciphers are enabled for each hostname using the following command:
nmap -sV --script ssl-enum-ciphers -p 443 <hostname>
Another tool is at https://github.com/drwetter/testssl.sh.
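For example (assuming the script has been cloned locally), protocol support alone can be checked with:
./testssl.sh --protocols <hostname>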
I have a service that's currently fronted by AWS API Gateway. API Gateway does not offer static ("Elastic", in AWS parlance) IPs.
A client requires the ability to hit the API while using an IP allowlist, so I've been attempting to configure a (dockerized) nginx proxy on an instance with an Elastic IP. I'm able to get a response from API Gateway via the proxy, but it's complaining about SSL.
From a browser, addressing the instance's IP: "ERR_SSL_VERSION_OR_CIPHER_MISMATCH"
From a shell on the instance itself:
# curl -v https://localhost:443
* Trying 127.0.0.1:443...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS alert, unknown CA (560):
* SSL certificate problem: unable to get local issuer certificate
* Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
My nginx.conf is very basic:
events {}

stream {
    # upstream group pointing at the API Gateway endpoint
    upstream test_api {
        server my.domain.placeholder.com:443;
    }

    # plain TCP pass-through on 443: TLS is not terminated here
    server {
        listen 443;
        proxy_pass test_api;
    }
}
I've been using the docs at https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/# but have not had any success.
Can anyone offer any insight on using nginx streams for this use case? I'd prefer not to terminate SSL on nginx itself and just use it as a TLS pass-through if possible.
Update/resolution:
To test this problem, I configured haproxy as a passthrough proxy instead of nginx, and had the exact same result. Apache/httpd too! So, I started to suspect my testing methods and found them to be the source of the failure. See https://stackoverflow.com/a/46355026/3620843
TL;DR: my curl invocation was failing because I was asking the server for "localhost" (in SNI and the Host header), a name the backend of course has no certificate for. Based on that, it stands to reason that requesting the frontend server's raw IP in a browser would fail the same way.
What's needed is curl's --resolve option.
This works:
curl -v --resolve example.com:443:127.0.0.1 https://example.com
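A close alternative is curl's --connect-to option (present in reasonably recent curl releases), which also works if the proxy listens on a non-standard port:
curl -v --connect-to example.com:443:127.0.0.1:443 https://example.com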
I have a self-signed site.dev certificate with CN=site.dev and an X509v3 subjectAltName DNS entry that includes site.dev, stored as a secret in k8s in the default namespace, with type: kubernetes.io/tls and the keys tls.crt and tls.key. Since it's self-signed, it does not contain intermediate certs (it can't).
Traefik is running with args:
- --configfile=/config/traefik.toml
- --defaultentrypoints=https,http
- --entrypoints=Name:https Address::443 TLS
- --entrypoints=Name:http Address::80
And when the ingress starts, Traefik logs:
{"level":"error","msg":"Error configuring TLS for ingress default/site-dev: secret default/site-dev-tls does not exist","time":"2019-04-20T21:09:02Z"}
The ingress has
tls:
- secretName: site-dev-tls
And this is the output of curl:
curl https://site.dev:443/ -v
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to site.dev (127.0.0.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
* stopped the pause stream!
* Closing connection 0
curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
$ curl http://site.dev:443/
404 page not found
$ kubectl auth can-i get secrets/site-dev-tls --namespace default --as system:serviceaccount:kube-system:traefik-ingress-controller
yes
I'm not sure what I'm doing wrong... Any help appreciated.
A 404 means that there is no service or deployment behind the ingress.
The ingress itself works fine, according to the curl output.
Just deploy something and point the ingress at it. That should solve the issue.
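It may also be worth confirming that the secret the Traefik log complains about really exists under that exact name and namespace, e.g.:
kubectl -n default get secret site-dev-tls
kubectl -n default get secret site-dev-tls -o jsonpath='{.type}'   # should print kubernetes.io/tls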
Environment
I have set up Proxy Protocol support on an AWS Classic Load Balancer as shown here, which forwards traffic to backend nginx instances (configured with ModSecurity).
Everything works great and I can hit my websites from the open internet.
Now, since my nginx configuration is done in AWS User Data, I want to run some checks before the instance starts serving traffic, which is achievable through AWS Lifecycle Hooks.
Problem
Before enabling Proxy Protocol, I used to check whether my nginx instance was healthy and ModSecurity was working by checking for a 403 response from this command:
$ curl -ks "https://localhost/foo?username=1'%20or%20'1'%20=%20'"
After enabling Proxy Protocol, I can't do this anymore, as the command fails with the error below, which is expected as per this link.
# curl -k https://localhost -v
* About to connect() to localhost port 443 (#0)
* Trying ::1...
* Connected to localhost (::1) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* NSS error -5938 (PR_END_OF_FILE_ERROR)
* Encountered end of file
* Closing connection 0
curl: (35) Encountered end of file
# cat /var/logs/nginx/error.log
2017/10/26 07:53:08 [error] 45#45: *5348 broken header: "���4"�U�8ۭ�u��%d�z��mRN�[e��<�,�
�+̩� �0��/̨��98k�̪32g�5=�/<
" while reading PROXY protocol, client: 172.17.0.1, server: 0.0.0.0:443
What other options do I have to programmatically check nginx apart from curl? Maybe something in some other language?
You can use curl's --haproxy-protocol option (added in curl 7.60.0), which prepends the Proxy Protocol header to the request.
curl --haproxy-protocol localhost
So:
curl -ks --haproxy-protocol "https://localhost/foo?username=1'%20or%20'1'%20=%20'"
The Proxy Protocol prepends a plain-text line before anything else is streamed:
PROXY TCP4 127.0.0.1 127.0.0.1 0 8080
The above is an example, but it has to be the very first thing on the connection. So if NGINX is listening on both SSL and HTTP with proxy_protocol, it expects to see this line first, before anything else.
So if I do
$ curl localhost:81
curl: (52) Empty reply from server
And in nginx logs
web_1 | 2017/10/27 06:35:15 [error] 5#5: *2 broken header: "GET / HTTP/1.1
If I do
$ printf "PROXY TCP4 127.0.0.1 127.0.0.1 0 80\r\nGET /test/abc\r\n\r\n" | nc localhost 81
You can reach API /test/abc and args_given = ,
It works: because I send the Proxy Protocol line first, nginx accepts the request.
Now, in the case of SSL, if I use the below
printf "PROXY TCP4 127.0.0.1 127.0.0.1 0 8080\r\nGET /test/abc\r\n\r\n" | openssl s_client -connect localhost:8080
It would still error out
web_1 | 2017/10/27 06:37:27 [error] 5#5: *1 broken header: ",(�� #_5���_'���/��ߗ
That is because the client tries to do the TLS handshake first, instead of sending the Proxy Protocol line first and then doing the handshake.
So your possible solutions are:
Terminate SSL on the LB, handle plain HTTP on nginx with proxy_protocol, and use the nc command option I posted above.
Add a listen 127.0.0.1:<randomlargeport>; (without proxy_protocol) and run your test against that port. This is still safe, as you are listening on localhost only.
Add another SSL listener pair: listen 127.0.0.1:443 ssl; and listen <private_ipv4>:443 ssl proxy_protocol;
The solutions are listed in my order of preference; you can make your own choice.
Thanks Tarun for the detailed explanation. I discussed it within the team and we ended up creating another nginx virtual host on port 80 and using that to check ModSecurity, as below:
curl "http://localhost/foo?username=1'%20or%20'1'%20=%20'"
Unfortunately the bash version didn't work in my case, so I wrote this Python 3 code:
#!/usr/bin/env python3
import socket
import sys


def check_status(host, port):
    '''Check app status, return True if ok'''
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(3)
        s.connect((host, port))
        # send a minimal raw HTTP/1.1 request for the status endpoint
        s.sendall(b'GET /status HTTP/1.1\r\nHost: api.example.com\r\n'
                  b'User-Agent: curl7.0\r\nAccept: */*\r\n\r\n')
        data = s.recv(1024)
        # the status endpoint is expected to reply with a body ending in "OK"
        if data.decode().endswith('OK'):
            return True
        else:
            return False


try:
    status = check_status('127.0.0.1', 80)
except Exception:
    # any connection/timeout/parse error counts as "not healthy"
    status = False

if status:
    sys.exit(0)
else:
    sys.exit(1)
Server info
I have a server running nginx 1.12.1 (built with SNI enabled) and Resin 3.1.6 on JDK 1.6.0.
Nginx proxies port 80 to Resin's 8080 and port 443 to Resin's 8443.
First issue
When nginx is running, I can access the site over port 80. But over 443 I get a 502 error, and the error log shows dh key too small.
Second issue
Then I compiled nginx against openssl-0.9.8f with SNI disabled, and everything worked. But when I compiled nginx against openssl-0.9.8f with SNI enabled, I got a 502 again, and the error log shows SSL: error:140773F2:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert unexpected message
What I'm confused about
So, is the first issue (dh key too small) a case of weak Diffie-Hellman? Is the reason that my JDK is too old?
And is the second issue because nginx sends SNI information to Resin, but Resin doesn't support SNI, so the SSL handshake can't complete?
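One way to check the first hypothesis is to inspect the Diffie-Hellman parameters Resin actually offers on 8443, for example with nmap's ssl-dh-params script (assuming nmap is available on the box; the port is taken from the setup described above):
nmap --script ssl-dh-params -p 8443 localhost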