Mutual SSL authentication with Qt

I am new to SSL/networking and want to use mutual SSL (the client verifies the server and the server verifies the client). I found a white paper online (http://www.infidigm.net/articles/qsslsocket_for_ssl_beginners/) that gave me some guidance for setting up my certs and keys. That paper uses a localhost IP address for the client's cert file. I want to switch this to a registered domain name (scp.radiant.io). This FQDN is local to my Ubuntu OS for testing purposes.
First I updated my localhost to have a domain name (scp.radiant.io) by modifying /etc/hosts (sudo nano /etc/hosts) to say 127.0.0.1 scp.radiant.io localhost.
Next I created certificates and private keys for both client and server.
a. The cert-generation steps for the server are below; the same commands are run for the client to create the client certs:
openssl req -out server_ca.pem -new -x509 -nodes -subj "/C=$COUNTRY/ST=$STATE/L=$LOCALITY/O=$ORG/OU=$ORG_UNIT/CN=server/emailAddress=radiant.$EMAIL"
mv privkey.pem server_privatekey.pem
touch server_index.txt
echo "00" >> server_index.txt
openssl genrsa -out server_local.key 1024
openssl req -key server_local.key -new -out server_local.req -subj "/C=$COUNTRY/ST=$STATE/L=$LOCALITY/O=$ORG/OU=$ORG_UNIT/CN=scp.radiant.io/emailAddress=$EMAIL"
openssl x509 -req -in server_local.req -CA server_ca.pem -CAkey server_privatekey.pem -CAserial server_index.txt -out server_local.pem
b. This generates the CA certs (server_ca.pem and client_ca.pem).
c. This generates the local cert files (server_local.pem and client_local.pem). THIS IS WHERE I SET THE FQDN to scp.radiant.io.
d. This generates the local keys (server_local.key and client_local.key).
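As a sanity check (a sketch, assuming the file names above), each local cert can be verified against its CA and its subject inspected:
openssl verify -CAfile server_ca.pem server_local.pem
openssl x509 -in server_local.pem -noout -subject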
I use the generated cert files to set up the SSL configuration on the QSslSocket for both sides, like so:
// client socket setup (QSslKey/QSslCertificate objects, not path strings)
QFile clientKeyFile("server_local.key");
clientKeyFile.open(QIODevice::ReadOnly);
config.setPrivateKey(QSslKey(&clientKeyFile, QSsl::Rsa));
config.setLocalCertificate(QSslCertificate::fromPath("server_local.pem").first());
config.setCaCertificates(QSslCertificate::fromPath("client_ca.pem"));
config.setPeerVerifyMode(QSslSocket::VerifyPeer);
sslSocket->setSslConfiguration(config);
sslSocket->connectToHostEncrypted("scp.radiant.io", 1200);
// server socket setup
QFile serverKeyFile("client_local.key");
serverKeyFile.open(QIODevice::ReadOnly);
config.setPrivateKey(QSslKey(&serverKeyFile, QSsl::Rsa));
config.setLocalCertificate(QSslCertificate::fromPath("client_local.pem").first());
config.setCaCertificates(QSslCertificate::fromPath("server_ca.pem"));
config.setPeerVerifyMode(QSslSocket::VerifyPeer);
sslSocket->setSslConfiguration(config);
sslSocket->startServerEncryption();
When running this code I get the following SSL error: "The host name did not match any of the valid hosts for this certificate".
Now, if I change the client socket to pass the peer verify name when connecting, sslSocket->connectToHostEncrypted("scp.radiant.io", 1200, "scp.radiant.io");, it works.
I don't understand why I have to set the peer-verify-name argument when connecting encrypted. I would like to use the same certificates for my WebSockets implementation as well, but the QWebSocket class does not allow you to set the peer verify name when connecting. So I must be doing something wrong at the cert level or the OS level for my FQDN. Any networking and SSL help would be appreciated.
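One thing worth checking at the cert level (an assumption on the editor's part, not something confirmed in this thread): modern TLS hostname verification matches the certificate's subjectAltName extension rather than the CN, and the signing step above does not add a SAN. A sketch of re-issuing the local cert with a SAN, reusing the file names from the question:
echo "subjectAltName=DNS:scp.radiant.io" > san.ext
openssl x509 -req -in server_local.req -CA server_ca.pem -CAkey server_privatekey.pem -CAserial server_index.txt -extfile san.ext -out server_local.pem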

I think you can ignore this error using "ignoreSslErrors" and let the handshake continue. Bear in mind that ignoring SSL errors wholesale disables protection against man-in-the-middle attacks, so it is safer to ignore only the specific expected error.

Related

Why Azure Gateway can't properly bind root certificate with nginx ingress controller?

I'm trying to create a solution based on Azure AKS Baseline.
I have an AKS cluster with the Nginx Ingress Controller and Azure Gateway V2.
I need to secure the communication between Azure Gateway V2 and the Nginx ingress controller, using certificates generated by Azure Key Vault.
In the backend health probe I have this error:
The root certificate of the server certificate used by the backend does not match the trusted root certificate added to the application gateway. Ensure that you add the correct root certificate to whitelist the backend
Three certificates were added to the Key Vault: root, intermediate, and vl.aks-ingress.mydomain.com.
The intermediate certificate's CSR was signed by the root private key and merged into the Key Vault.
The domain's CSR was signed by the intermediate private key and merged into the Key Vault.
This is how I signed the intermediate and domain certificates:
$signerCertSecret = Get-AzKeyVaultSecret -VaultName $KeyVaultName -Name $SignerCertificateName
$signerCertsecretByte = [Convert]::FromBase64String(($signerCertSecret.SecretValue | ConvertFrom-SecureString -AsPlainText))
$signerCertPfxFilePath = New-TemporaryFile
[System.IO.File]::WriteAllBytes($signerCertPfxFilePath, $signerCertsecretByte)
$policy = New-AzKeyVaultCertificatePolicy -SecretContentType "application/x-pkcs12" `
    -SubjectName "CN=$Subject" `
    -IssuerName "Unknown" `
    -ValidityInMonths 60 `
    -ReuseKeyOnRenewal
$null = Add-AzKeyVaultCertificate -VaultName $KeyVaultName -Name $CertificateName -CertificatePolicy $policy
$csrTempFile = New-TemporaryFile
$certCsr = '-----BEGIN CERTIFICATE REQUEST-----' + `
[Environment]::NewLine + `
(Get-AzKeyVaultCertificateOperation -VaultName $keyVaultName -Name $CertificateName).CertificateSigningRequest + `
[Environment]::NewLine + `
'-----END CERTIFICATE REQUEST-----'
[System.IO.File]::WriteAllText($csrTempFile, $certCsr)
$signerKeyFile = New-TemporaryFile
$signerCertFile = New-TemporaryFile
$pass = "pass123"
openssl pkcs12 -in $signerCertPfxFilePath -nocerts -out $signerKeyFile -passin pass: -passout pass:$pass
openssl pkcs12 -in $signerCertPfxFilePath -clcerts -nokeys -out $signerCertFile -passin pass:
$signedNewCert = New-TemporaryFile
openssl x509 -req -in $csrTempFile -days 3650 -CA $signerCertFile -CAkey $signerKeyFile -CAcreateserial -out $signedNewCert -passin pass:$pass
az keyvault certificate pending merge --vault-name $KeyVaultName --name $CertificateName --file $signedNewCert
After that, I imported everything to my Windows machine and exported the full chain (I didn't find any way to do it automatically via Key Vault). I added this full-chain certificate to the Key Vault as a secret. Then that secret was added as a secret to the AKS. To test that everything is OK with the nginx ingress, I added a Windows VM to the same network. Inside AKS I added a super small server and requested it from a browser in my VM. The browser complained that the certificate is unsafe, but I got the full chain.
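For reference, the full chain can also be assembled without the Windows detour (a sketch, assuming PEM files for each tier are at hand; these file names are made up):
# leaf first, then intermediate, then root
cat aks-ingress.crt intermediate.crt root.crt > fullchain.pem
# check that the leaf really chains up to the root
openssl verify -CAfile root.crt -untrusted intermediate.crt aks-ingress.crt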
Then I downloaded the root CA .cer from the Key Vault and added it to the gateway.
My understanding is that everything should work properly after that.
But I'm still getting the "root certificate wrong" error.
I would appreciate any help or advice, because I've already wasted a week on this and don't have any significant progress.
Thanks in advance!

Trust Self-signed SSL/TLS Certificate for Secure Inter-service Communication

We have an orchestration of microservices running on a server. An nginx service acts as a proxy between the microservices. We would like to have all communication over SSL with our self-signed certificates.
We want to add our private CA to every service (running on Debian Buster), so that the certificate is considered valid everywhere within that service. We generate our CA and server certificate as follows:
# Generate Root CA Certificate
openssl genrsa -des3 -out CA-key.pem 2048
openssl req -new -key CA-key.pem -x509 -days 1000 -out CA-cert.pem
# Generate and Sign a Server Certificate
openssl genrsa -des3 -out server-key.pem 2048
openssl req -new -config openssl.cnf -key server-key.pem -out signingReq.csr
openssl x509 -req -days 365 -in signingReq.csr -CA CA-cert.pem -CAkey CA-key.pem -CAcreateserial -out server-cert.pem
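A quick check that the signed server cert validates against the CA (assuming the file names above):
openssl verify -CAfile CA-cert.pem server-cert.pem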
However, we can't make the microservices consider the certificate valid and trust it. When a GET request is issued using the requests library of Python in the microservice, the following exception is thrown:
requests.exceptions.SSLError: HTTPSConnectionPool(host='server.name', port=443): Max
retries exceeded with url: /url/to/microservice2/routed/via/nginx/ (Caused by
SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate
verify failed: self signed certificate (_ssl.c:1076)')))
What we have tried so far:
Copying the certificate to /usr/share/ca-certificate/ and running the sudo dpkg-reconfigure ca-certificates and/or update-ca-certificates commands.
Setting the REQUESTS_CA_BUNDLE env variable to /path/to/internal-CA-cert.pem.
Setting the SSL_CERT_FILE env variable to /path/to/internal-CA-cert.pem.
The only workaround that works is setting verify=False in requests.get(url, params=params, verify=False, **kwargs), so that the validity of the SSL certificate is ignored. But this is not the workflow we would want to implement for all the microservices and communications.
The solution was to copy the self-signed server certificate (signed with our own CA) to the /usr/local/share/ca-certificates directory and run update-ca-certificates, which ships with Debian distributions (similar solutions are available for other Linux distributions).
cp /path/to/certificate/mycert.crt /usr/local/share/ca-certificates/mycert.crt
update-ca-certificates
However, the tricky part is that the above solution is not sufficient for the Python requests library to consider the certificate valid. To resolve that, one has to append the self-signed server certificate to ca-certificates.crt and then set the environment variable REQUESTS_CA_BUNDLE to that appended file.
cat /path/to/certificate/mycert.crt >>/etc/ssl/certs/ca-certificates.crt
export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
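To confirm the bundle works outside Python as well (a hedged check; the host name is taken from the traceback above):
curl --cacert /etc/ssl/certs/ca-certificates.crt https://server.name/url/to/microservice2/routed/via/nginx/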

How to run 'dotnet dev-certs https --trust'?

I'm new to ASP.NET.
Environment:
Ubuntu 18.04
Visual Studio Code
.NET SDK 2.2.105
I'm having trouble running a command.
I was reading the tutorial at
https://learn.microsoft.com/ja-jp/aspnet/core/tutorials/razor-pages/razor-pages-start?view=aspnetcore-2.2&tabs=visual-studio-code
and ran this command:
dotnet dev-certs https --trust
I expected https://localhost to be trusted,
but I got this error message:
$ Specify --help for a list of available options and commands.
It seems that the command "dotnet dev-certs https" has no --trust option.
How can I resolve this problem?
On Ubuntu the standard mechanism would be:
dotnet dev-certs https -v to generate a self-signed cert
convert the generated cert in ~/.dotnet/corefx/cryptography/x509stores/my from pfx to pem using openssl pkcs12 -in <certname>.pfx -nokeys -out localhost.crt -nodes
copy localhost.crt to /usr/local/share/ca-certificates
trust the certificate using sudo update-ca-certificates
verify that the cert is copied to /etc/ssl/certs/localhost.pem (the extension changes)
verify that it's trusted using openssl verify localhost.crt
Unfortunately this does not work:
dotnet dev-certs https generates certificates that are affected by the issue described on https://github.com/openssl/openssl/issues/1418 and https://github.com/dotnet/aspnetcore/issues/7246:
$ openssl verify localhost.crt
CN = localhost
error 20 at 0 depth lookup: unable to get local issuer certificate
error localhost.crt: verification failed
Because of that, it's impossible to have a dotnet client trust the certificate.
Workaround: (tested on Openssl 1.1.1c)
manually generate self-signed cert
trust this cert
force your application to use this cert
In detail:
manually generate self-signed cert:
create localhost.conf file with the following content:
[req]
default_bits = 2048
default_keyfile = localhost.key
distinguished_name = req_distinguished_name
req_extensions = req_ext
x509_extensions = v3_ca
[req_distinguished_name]
commonName = Common Name (e.g. server FQDN or YOUR name)
commonName_default = localhost
commonName_max = 64
[req_ext]
subjectAltName = @alt_names
[v3_ca]
subjectAltName = @alt_names
basicConstraints = critical, CA:false
keyUsage = keyCertSign, cRLSign, digitalSignature,keyEncipherment
[alt_names]
DNS.1 = localhost
IP.1 = 127.0.0.1
generate cert using openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout localhost.key -out localhost.crt -config localhost.conf
convert cert to pfx using openssl pkcs12 -export -out localhost.pfx -inkey localhost.key -in localhost.crt
(optionally) verify cert using openssl verify -CAfile localhost.crt localhost.crt which should yield localhost.crt: OK
as it's not trusted yet, openssl verify localhost.crt should fail with:
CN = localhost
error 18 at 0 depth lookup: self signed certificate
error localhost.crt: verification failed
trust this cert:
copy localhost.crt to /usr/local/share/ca-certificates
trust the certificate using sudo update-ca-certificates
verify if the cert is copied to /etc/ssl/certs/localhost.pem (extension changes)
verifying the cert without the CAfile option should work now
$ openssl verify localhost.crt
localhost.crt: OK
force your application to use this cert
update your appsettings.json with the following settings:
"Kestrel": {
"Certificates": {
"Default": {
"Path": "localhost.pfx",
"Password": ""
}
}
}
While the answer provided by @chrsvdb is helpful, it does not solve all problems. I still had an issue with service-to-service communication (HttpClient PartialChain error), and you must also reconfigure Kestrel to use your own certificate. It is possible to create a self-signed certificate and import it into the .NET SDK. All you need is to specify the 1.3.6.1.4.1.311.84.1.1 extension in the certificate.
After that the cert can be imported into the .NET Core SDK and trusted. Trusting in Linux is a bit hard, as each application can have its own certificate store. E.g. Chromium and Edge use nssdb, which can be configured with certutil as described by John Duffy. Unfortunately the location of the nssdb may be different when you install an application as a snap; then each application has its own database. E.g. for the Chromium snap the path will be $HOME/snap/chromium/current/.pki/nssdb, for the Postman snap it will be $HOME/snap/postman/current/.pki/nssdb, and so on.
Therefore I have created a script which generates the cert and trusts it for the Postman snap, the Chromium snap, the current user's nssdb, and at system level. It also imports the cert into the .NET SDK so it will be used by ASP.NET Core without changing the configuration. You can find more information about the script in my blog post https://blog.wille-zone.de/post/aspnetcore-devcert-for-ubuntu
In addition to crisvdb's answer, I have several pieces of information to add as a continuation of the walkthrough. This is too complex for a comment, so read crisvdb's answer first and then come back here.
Start from the "in detail" part of that answer.
You can make your cert in any folder; it does not have to be the same folder as the app.
Treat openssl verify -CAfile localhost.crt localhost.crt as mandatory, not optional. It will help.
Do not recompile or touch the code while you are doing this, in order to keep the first scenario clean.
If you run sudo update-ca-certificates, it will tell you which folder the certificate should be copied to.
In some distributions, such as Raspbian for Raspberry Pi, CA certificates are located in /etc/ssl/certs as well as /usr/share/ca-certificates/ and in some cases /usr/local/share/certificates.
Do not copy the cert manually to the trusted certs; run sudo update-ca-certificates after you copy the cert into the right folder. If it doesn't work (doesn't update or add any certificate), copy it to every possible folder.
If you used a password while making the certificate, you should use it in appsettings.json.
If you get this error:
Interop+Crypto+OpenSslCryptographicException: error:2006D002:BIO
routines:BIO_new_file:system lib
take into consideration that this error means "access denied". It can be because you don't have permissions or something related.
It could also be that the file is not found; I use the full path in the config:
"Path": "/home/user/www/myfolder1/myapp/localhost.pfx",
After that, and if everything works, you may see a 500 error if you are using Apache or Apache2.
If you get the following error in the Apache logs of the site:
[ssl:error] [remote ::1:yourport] AH01961: SSL Proxy requested for
yoursite.com:443 but not enabled [Hint: SSLProxyEngine] [proxy:error]
AH00961: HTTPS: failed to enable ssl support for [::1]:yourport
(localhost)
you must set the following configuration in the VirtualHost, after SSLEngine On and before your ProxyPass:
SSLProxyEngine on
If everything works after that, you may still see a 500 error.
If you get the following error in the Apache logs of the site:
[proxy:error] [client x.x.x.x:port] AH00898: Error during SSL
Handshake with remote server returned by /
[proxy_http:error] [client x.x.x.x:port] AH01097: pass request body failed to [::1]:port
(localhost) from x.x.x.x()
you must set the following configuration in the VirtualHost, after SSLProxyEngine on and before your ProxyPass:
SSLProxyVerify none
SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off
UPDATE
If you are renewing the certificate and using the same names, take into consideration that you should remove your old pem file from /etc/ssl/certs.
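Something along these lines (a sketch; the file name follows the walkthrough above):
sudo rm /etc/ssl/certs/localhost.pem
sudo update-ca-certificates --fresh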
UPDATE 2
If it returns:
Unhandled exception. Interop+Crypto+OpenSslCryptographicException: error:2006D002:BIO routines:BIO_new_file:system lib
check that your pfx file has 755 permissions.
If appsettings.json does not seem to load (the app stays on the default port 5000, or SQL or any other configuration doesn't load or can't be read), take into consideration that dotnet must be executed in the same directory as appsettings.json.
Looks like this is a known issue with dotnet global tools, and that specific command is only available for macOS and Windows. See this issue on GitHub: Issue 6066.
It seems there may be a workaround for Linux users based on this SO post: ASP.Net Core application service only listening to Port 5000 on Ubuntu.
For Chrome:
Click "Not Secure" in address bar.
Click Certificate.
Click Details.
Click Export.
Run: certutil -d sql:$HOME/.pki/nssdb -A -t "P,," -n {FILE_NAME} -i {FILE_NAME}
Restart Chrome.
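To confirm the import (the nssdb path is the one used above):
certutil -d sql:$HOME/.pki/nssdb -L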
It looks like the following could help to trust the dotnet dev certs:
https://blog.wille-zone.de/post/aspnetcore-devcert-for-ubuntu/
Then you will also see in the browser that the certificate is OK and valid for the next year.
Give it a try...
Good luck!

Decrypt OpenSSL binary through NGINX as it is received (on the fly)

I have a small embedded Linux device that has 128 MB flash storage available to work with as a scratchpad. This device runs an NGINX web server. In order to do a firmware update - the system receives an encrypted binary file as an HTTPS POST through NGINX to the scratchpad. The system then decrypts the file and flashes a different QSPI flash device in order to complete the update.
The firmware binary is encrypted outside the device like this:
openssl smime -encrypt -binary -aes-256-cbc -in plainfile.zip -out encrypted.zip.enc -outform DER yourSslCertificate.pem
The firmware binary is decrypted, after being received through NGINX, on the device like this:
openssl smime -decrypt -binary -in encrypted.zip.enc -inform DER -out decrypted.zip -inkey private.key -passin pass:your_password
I'd really like to decrypt the binary as it is received (on the fly) through NGINX, so that it appears on the flash scratchpad in its decrypted form.
I've been unable to find any existing NGINX modules on Google that would do this. How might I accomplish this? Thanks.
First of all, you need to understand one thing: while nginx is decrypting the file, all other requests to that worker will be blocked. That's why nginx does not support CGI, only FastCGI.
If that is OK for you (for example, if nginx is used only for update purposes), you can use the perl or Lua extension: http://nginx.org/en/docs/http/ngx_http_perl_module.html, https://github.com/openresty/lua-nginx-module
Using these modules you can exec a shell command. To access the uploaded file you need to set the client_body_in_file_only directive - https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_in_file_only
Example for perl module (untested):
location /upload {
    client_body_in_file_only clean;
    perl 'sub {
        my $r = shift;
        if ($r->request_body_file) {
            system("openssl smime -decrypt -binary -in ".$r->request_body_file." -inform DER -out /tmp/decrypted.zip -inkey private.key -passin pass:your_password");
        }
    }';
}
But it is much better to use FastCGI. You can use a light FastCGI wrapper for it, for example fcgiwrap: https://www.nginx.com/resources/wiki/start/topics/examples/fcgiwrap/
There's a widely-known attack against non-event-based servers like Apache known as Slowloris, where the client initiates a number of HTTP requests without sending them in full (where each request is instead being sent in part for a prolonged interval, which is a very cheap operation, both to send and receive, for an event-based architecture like that of nginx, but at the same time is very expensive for some other servers).
As a proxy server, nginx protects its backends from such an attack. Indeed, protection may come at a cost in certain circumstances, and can be turned off with the …_request_buffering directives:
http://nginx.org/r/proxy_request_buffering
http://nginx.org/r/scgi_request_buffering
http://nginx.org/r/uwsgi_request_buffering
http://nginx.org/r/fastcgi_request_buffering
What you would do is disable request buffering and then pipe the incoming file directly to openssl as it is received.
Note that you can always use /dev/fd/0 in place of the filename to specify stdin (depending on the tool, using - in place of the filename may also be an option).
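For instance, with buffering disabled and the upload handed to a backend handler, openssl smime can read the stream from stdin and write to stdout, reusing the command from the question (a sketch):
openssl smime -decrypt -binary -inform DER -inkey private.key -passin pass:your_password < encrypted.zip.enc > decrypted.zip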

SEC_E_ALGORITHM_MISMATCH (0x80090331) - The client and server cannot communicate, because they do not possess a common algorithm

I am using a Python client to connect to a C++ server over HTTPS. However, when the client tries to download a file from the server, I get an error reported by the server: "The client and server cannot communicate, because they do not possess a common algorithm."
The client and the server are on the same machine. The server uses the following command to create a pem file:
openssl.exe" req -new -newkey rsa:1024 -days 9999 -nodes -x509 \
-keyout etc\\filestore.pem -out etc\\filestore.pem
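For diagnosis, one can check which protocols and ciphers the server actually accepts during the handshake (a sketch, assuming the server listens locally on port 443; adjust host and port):
openssl s_client -connect localhost:443 -tls1_2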
What have I tried?
I have imported the pem file into the certificates folder.
I enabled SSL 2.0, SSL 3.0, TLS 1.0, TLS 1.1, and TLS 1.2 in the registry.
There is a URL that gets generated by the server. I pasted this URL into the Chrome browser, where I got a certificate warning.
I met this problem today, too, and I resolved it.
It's because your client does not support the SSL 3.0 algorithms your server uses.
Just change your server-side code from:
SSLv3_server_method()
to:
SSLv23_server_method()
I did resolve the issue. The SSLScan tool came to my rescue: sslscan --no-failed <host:port> gave me the list of ciphers that the server supported. On the client side I am using curl library calls to download the file. What I did was
setOpt(new curlpp::options::SslCipherList("AES256-SHA"));
which sets the cipher list to one that my server supports.
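For example (hypothetical host and port; the --no-failed flag comes from the answer above and may not exist in newer sslscan builds):
sslscan --no-failed myserver.example.com:443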
