Jar signing common name

I'm looking to sign a jar that will be launched via jnlp from different servers behind a firewall on an internal network. The jar file needs to be signed with a trusted certificate so as to avoid the security warning.
I've set up keystores and SSL certificates in the past, but only for use with web applications. Typically the Common Name used when setting up the key pair should be the domain name pointing to the web application (e.g. mysite.example.com).
How does this change when signing a jar that will be served via jnlp from different servers that typically do not have domain names assigned to them? Is the Common Name as important here? Can we set up and sign the jar using a single trusted certificate with one Common Name, to be used for all servers?
Thanks!

Normally we sign something with a private key and verify it with the public key, so this is a little different from trusting a web (SSL) server.
The procedure:
create (or obtain) a key pair and certificate
import it into a keystore
sign the jar
http://docs.oracle.com/javase/7/docs/technotes/tools/windows/jarsigner.html
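As a rough sketch of those steps with the JDK keytool and jarsigner tools (the alias, keystore, and file names are placeholders; the .cer file imported in the third step would be the reply you get back from the CA for the request generated in the second step):
keytool -genkeypair -alias codesign -keyalg RSA -keysize 2048 -keystore signing.jks
keytool -certreq -alias codesign -file codesign.csr -keystore signing.jks
keytool -importcert -alias codesign -file codesign.cer -keystore signing.jks
jarsigner -keystore signing.jks myapp.jar codesign
jarsigner -verify myapp.jar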

Related

Sharing a Private Key with WordPress Host

We utilize an outside vendor for hosting our WordPress site. The SSL certificate will expire soon and they have requested that I send them the contents of the PFX file, unencrypted, via email. The PFX file contains the KEY file and the CRT file. Our SSL certificate is a wildcard for our domain; the same key is used to protect our VPN and another web server which I manage. We do not use it to sign any code.
If I have to share this/these files, I'd much prefer to do it by way of OneDrive or Google Drive, but the host service person says that emailing presents no risk since an attacker would need to get into our DNS to make use of it.
Am I justified in pushing back on this? I find it weird that they haven't even offered to send it encrypted and provide the passcode via another mechanism.
TIA

Risks of developer signing credentials

According to the IdentityServer4 documentation, AddDeveloperSigningCredential is, as the name implies, for development purposes only. I have found several articles that describe the process of switching to other signing certificates that are more geared towards production environments. My question is: why is the switch necessary? Are the generated developer signing credentials insecure in some way, and if so, what is the attack vector that can be exploited? What if I am running my identity server through an nginx reverse proxy and using an AWS cloud signing credential through a load balancer? Does that adequately resolve the exposed attack vector? In short, what risks are there to using developer signing credentials in production?
The generated RSA key (2048-bit) is totally fine from a security point of view - you won't get "bad crypto" compared to a "production key".
The main problem with the dev signing key is that it is loaded from the local hard-disk - and especially (by default) from the application directory.
You should have a more secure storage location for keys (e.g. the Windows certificate store or some storage service) so that an attacker who can read files from the app server's hard disk cannot easily recover the private key.
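For example, a minimal sketch of loading the signing certificate from the LocalMachine certificate store instead of the application directory might look like this (the thumbprint is a placeholder, and this assumes IdentityServer4's AddSigningCredential overload that takes an X509Certificate2):

using System.Security.Cryptography.X509Certificates;

// Inside ConfigureServices(IServiceCollection services):
using (var store = new X509Store(StoreName.My, StoreLocation.LocalMachine))
{
    store.Open(OpenFlags.ReadOnly);

    // Find the signing certificate by thumbprint (placeholder value below).
    var certs = store.Certificates.Find(
        X509FindType.FindByThumbprint, "YOUR-CERT-THUMBPRINT", validOnly: false);
    if (certs.Count == 0)
        throw new InvalidOperationException("Signing certificate not found.");

    services.AddIdentityServer()
            .AddSigningCredential(certs[0]);   // instead of AddDeveloperSigningCredential()
}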

Storing Azure Vault Client ID and Client Secret

I am using .NET Core 2.0 and ASP.NET Core 2.0 for application development. The "test" application is a .NET Core console application. The core code I am writing is a class library that will be used once it has been properly tested. I chose to do this since I won't be putting it to use for a while (it's replacing older ASP.NET code).
Anyway, since I have to work with a LOT of API keys for various services, I decided to use Microsoft Azure Key Vault for storing the keys. I have this all set up and understand how it works. The test application uses a test Azure account so it's not critical. And since this is replacing legacy code and it's in its infancy, I am the sole developer.
Basically, I'm running into this issue. There's not too much information on Azure Key Vault from what I can see. A lot of examples store the Client ID and Secret in a plain-text json file (for example: https://www.humankode.com/asp-net-core/how-to-store-secrets-in-azure-key-vault-using-net-core). I really don't understand how this can be secure. If someone were to get those keys they could easily access the information stored in Azure, right?
Microsoft's documentation has a PowerShell command that grants access (I lost the original link; this is the closest I can find: https://www.red-gate.com/simple-talk/cloud/platform-as-a-service/setting-up-and-configuring-an-azure-key-vault/). My development operating system is Windows 10 and my primary server operating system is Debian.
How would I approach this?
Yes, you are right, the plain-text config file could be used only during development, not for production purposes. In general, the available options depend on where and how you host the app.
If you have an Azure Web App, you have at least the following built-in options (from the documentation):
Add the ClientId and ClientSecret values to the App Settings in the Azure portal. By doing this, the actual values will not be in the web.config but are protected via the portal, where you have separate access control capabilities. These values are substituted for the values that you entered in your web.config, so make sure that the names are the same.
Authenticate the Azure AD application using a Client ID and a Certificate instead of a Client ID and Client Secret. The steps to use a certificate in an Azure Web App are (a code sketch follows these steps):
Get or Create a Certificate
Associate the Certificate with an Azure AD application
Add code to your Web App to use the Certificate
Add a Certificate to your Web App
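As a rough sketch of what step 3 might look like in code, using the newer Azure SDK packages (Azure.Identity and Azure.Security.KeyVault.Secrets - an assumption on my part, since the linked articles use the older KeyVaultClient API); the tenant ID, client ID, vault URL, certificate path, and secret name are all placeholders:

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class KeyVaultCertificateAuthSketch
{
    static void Main()
    {
        // Authenticate as the AAD application by proving possession of the
        // certificate's private key; no client secret is stored anywhere.
        var credential = new ClientCertificateCredential(
            "00000000-0000-0000-0000-000000000000",   // AAD tenant ID (placeholder)
            "11111111-1111-1111-1111-111111111111",   // application (client) ID (placeholder)
            "app-auth-cert.pfx");                     // path to the certificate with private key

        var client = new SecretClient(new Uri("https://my-vault.vault.azure.net/"), credential);

        // Read a secret from the vault.
        KeyVaultSecret secret = client.GetSecret("MyApiKey");
        Console.WriteLine(secret.Value);
    }
}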
You may also find an approach that uses environment variables to store credentials. This may be OK only if you can guarantee that it is not possible to take a snapshot of the environment variables on the production machine. Look into Environment Variables Considered Harmful for Your Secrets for more details.
And one last thing: there is also a technique based on the idea that you need to store/pass only the ClientSecret value, while the ClientId is constructed from details of the machine/container where the app is hosted (e.g. the Docker container ID). I have found an example for HashiCorp Vault and an app hosted on AWS, but the general idea is the same: Secret management with Vault
In addition to the first answer, in the context of running applications on an Azure VM, instead of using a client_secret to authenticate you can use client certificate authentication, as explained in this documentation: Authenticate with a Certificate instead of a Client Secret.
In the flow described there:
The application authenticates to AAD by proving that it has the private key of the certificate (which is basically stored in CNG if you are using Windows).
The application gets back the access_token and then uses it to access the Key Vault.
The developer does not need to know the private key value of the certificate in order for their app to be successfully authenticated. Instead, they only need to know the location of the imported pfx (a container for private key and its certificate) in the Certificate Store.
At least on Windows, you as the secrets administrator can convert the private key and the certificate into a password-protected PFX file and then deploy it into the Windows certificate store. This way no one can learn the private key unless they know the password of the PFX file.
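As a rough sketch (the file names and the password are placeholders), the conversion and import could look like this with OpenSSL and certutil:
openssl pkcs12 -export -inkey app-auth.key -in app-auth.crt -out app-auth.pfx
certutil -p <pfx-password> -importpfx app-auth.pfx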
The other approach, specific to Azure Compute, is to use Azure Managed Service Identity (MSI). Using Azure MSI, Azure will automatically assign your resources, such as a VM, an identity / Service Principal, and you can fire requests at a specific endpoint that is only accessible by your resource to get the access_token. But be aware that Azure MSI is still in public preview, so please review the known issues before using it.
The way Azure Resource Manager assigns a Service Principal identity to your VM works like this:
When you enable MSI in a VM, Azure will create a service principal in your AAD.
Azure will then deploy a new MSI VM extension to your VM. This provides an endpoint at http://localhost:50432/oauth2/token to be used to get the access_token for the service principal.
You can then use the access_token to access resources, such as Key Vault, that the service principal has been authorized to access.
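For reference, a minimal sketch of the same idea using the current Azure SDK packages (Azure.Identity and Azure.Security.KeyVault.Secrets - my assumption, as they simply wrap the call to the local MSI token endpoint for you); the vault URL and secret name are placeholders:

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class ManagedIdentityKeyVaultSketch
{
    static void Main()
    {
        // ManagedIdentityCredential fetches the access_token from the VM's local
        // MSI endpoint; no secret or certificate ships with the application.
        var credential = new ManagedIdentityCredential();

        var client = new SecretClient(new Uri("https://my-vault.vault.azure.net/"), credential);
        KeyVaultSecret secret = client.GetSecret("MyApiKey");
        Console.WriteLine(secret.Value);
    }
}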

Verifying Client-Signed X509 Request in Web API without installing to Store

We have a Web API 2 application exposed to outside vendors for various integrations. We're adding a new one with DocuSign through their Connect service, and they will be signing their requests with their X509 certificate. I would rather not install the certificate on the server itself because we add new servers and deployments often based on load.
Here is my plan, and I'd like to know what the security risks are with it (assuming it will work at all).
DocuSign provides their X509 certificate for download. I want to place that *.cer file in my Web API application's ~/App_Data folder, along with any other certs from any other vendors. I will use a DelegatingHandler to grab the client certificate from the Request. I would then use the X509Chain class as described here to load all certificates from the ~/App_Data folder and to verify the request certificate.
From there I would map the certificate subject to a role and add that to the current thread to provide authentication for specific routes.
I've gathered from my research that this method would be less secure than installing DocuSign's certificate to the server's root store - is that correct? And how much less secure?
At the end of the day I'd like to (1) verify that the request is coming from who it says it's coming from, and (2) add roles based on the verified requester for authentication.
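For reference, a rough sketch of the kind of X509Chain check described above (the folder path and method names are illustrative; this is not a complete DelegatingHandler):

using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography.X509Certificates;

static class VendorCertificateValidator
{
    // Returns true if the client certificate chains to (or is) one of the vendor
    // certificates stored under App_Data, without touching the machine stores.
    public static bool IsKnownVendor(X509Certificate2 clientCert, string certFolder)
    {
        var knownCerts = Directory.GetFiles(certFolder, "*.cer")
                                  .Select(path => new X509Certificate2(path))
                                  .ToList();

        using (var chain = new X509Chain())
        {
            // Use the files from App_Data as the extra certificates for chain building.
            chain.ChainPolicy.ExtraStore.AddRange(knownCerts.ToArray());
            chain.ChainPolicy.RevocationMode = X509RevocationMode.NoCheck;
            chain.ChainPolicy.VerificationFlags =
                X509VerificationFlags.AllowUnknownCertificateAuthority;

            if (!chain.Build(clientCert))
                return false;

            // AllowUnknownCertificateAuthority lets Build() succeed for any self-signed
            // chain, so additionally require that some element of the resulting chain
            // matches one of the pinned vendor certificates by thumbprint.
            return chain.ChainElements
                        .Cast<X509ChainElement>()
                        .Any(element => knownCerts.Any(known =>
                            known.Thumbprint == element.Certificate.Thumbprint));
        }
    }
}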

WCF, Certificate Authentication - Common Errors and Confusing Arguments

I am trying to setup a WCF service to use a Certificate for Authenticating the client. I have read tons of posts on how to create the certificate, and I have been able to do so (finally).
I am installing the Cert Authority and the Cert on a server that runs Windows 2008 R2. When I open the MMC Certificates snap-in, I choose Computer account. Is this correct? I am doing this because my WCF service will run in a Windows Service and will be running even when no users are logged in. But admittedly, I don't know what the difference is between the three options:
My user account
Service account
Computer account
Once the snap-in loads, I import the Authority Cert into Trusted Root Certification Authorities. Then, I import the cert into Trusted Publishers. I don't encounter any errors when doing this. When I do the import, of both the Authority Cert and the Cert signed by that authority, I don't make any reference to the .pvk file. It is my understanding that the private key is embedded in either the cert or the authority cert. Here are the commands I use to create each cert:
MakeCert.exe
-n "CN=InternalCA"
-r
-sv InternalCA.pvk InternalCA.cer
-cy authority
MakeCert.exe
-sk InternalWebService
-iv InternalCA.pvk
-n "CN=InternalWebService"
-ic InternalCA.cer InternalWebService.cer
-sr localmachine
-ss root
-sky exchange
-pe
Notice I used -ss root. I have seen many posts using -ss My. I don't really understand what the difference is or when it is appropriate to use each value.
My WCF service runs on this machine inside a Managed Service (Windows Service). When I start my windows service, which hosts the WCF service, it crashes immediately and a seemingly common error is reported in the event viewer:
System.ArgumentException: It is likely that certificate 'CN=TempCertName' may not have a private key that is capable of key exchange or the process may not have access rights for the private key
I have found posts that say I need to grant permissions to the user running the service to the key.
This one seems to be a popular answer here on stackoverflow: Grant access with All Tasks/Manage Private Keys
But I don't have the All Tasks / Manage Private Keys option, and I'm not clear on how to do that. Also, the service is running under my domain account, which is an administrator and is also the same user that installed the cert.
Please Help :)
Here's the best link that should help you get your self-hosted SSL WCF service to work with your own custom CA/certificates: SSL with Self-hosted WCF Service.
After you get it working from the guide above, you may want to set up your service programmatically to use the right certificate at installation time.
I find verifying my HTTP.SYS configuration with the HTTPCfgUI tool easier than using the command-line httpcfg/netsh commands.
Next, if you still encounter errors, you can debug further using WCF Tracing, and you should turn on WCF Message Tracing as well. You can trace the .NET network stack too if the WCF tracing doesn't provide enough information.
You can test if your certificate/CA pair on the service is working by hitting your service URL in a browser on another machine. It should first state that the certificate is invalid. Then, import the CA on the machine into the trusted root, and hit the service URL again. This time it should display the service description page as usual, with no warnings.
I think you are close. Here are some suggestions:
Make sure you are getting a private key attached to your certificate in the second step. You have to run the command in an elevated process -- even if you have Admin privileges, you have to, for example, right-click and "Run as Administrator" to start the command shell you use for this command. Otherwise, you won't get the private key into the localMachine store.
I would use -ss my and put the certificate (with private key) into the Personal store. Here I see this:
cc.ClientCredentials.ClientCertificate.SetCertificate(
    StoreLocation.CurrentUser,
    StoreName.My,
    X509FindType.FindBySubjectName,
    "contoso.com");
and so wherever you are pointing to in your equivalent, put the certificate there.
You don't need to import the private key of the CA cert (the first one you created). That's only kept around to sign more certificates with MakeCert. You will need a copy of that CA certificate (without the private key!) on the client that is connecting in, otherwise the client won't be able to validate the InternalWebService certificate.
You don't really need the CA certificate locally on the server machine, because it will only be needed by the client. But it doesn't hurt, and would be needed if anything on the server connects to the service locally. Also, it makes the InternalWebService certificate look good on the MMC snap-in. You can try with and without the CA in the Trusted Root store and you'll see what I mean. But in any case, I would not put the private key of the CA into the local computer store.
Check the private key permissions for InternalWebService from the MMC snap-in (right-click on the certificate, then All Tasks, Manage Private Keys...). If you import under a different user account than the service runs under, then certainly it won't have access yet, and you have to go give it access. Otherwise the service will get the certificate, but it will appear that the certificate does not have a private key.
To summarize:
Run with Admin privileges to make sure the private key of InternalWebService is really getting into the certificate store. (You'll see a little key on the certificate in the MMC snap-in, and right-clicking on the certificate will have an option "Manage Private Keys..." which is not present if there is no private key attached.)
Put the InternalWebService certificate in the place where your Web service is looking for it. I would guess "Personal" (a.k.a. "my"), but you know where that is in your service config. Whether current user or local machine, look in your config.
Give access to the InternalWebService certificate private key.
Put the CA certificate -- without its private key -- under Trusted Root, and you'll need to put that on the client as well (or otherwise have the client accept an "untrusted certificate" at its end.)
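For completeness, here is a rough sketch of pointing a self-hosted service at that certificate in code; the contract, binding (net.tcp with transport security is just one choice), address, and subject-name lookup are illustrative assumptions, not the asker's actual configuration:

using System;
using System.Security.Cryptography.X509Certificates;
using System.ServiceModel;
using System.ServiceModel.Security;

[ServiceContract]
public interface IPingService
{
    [OperationContract]
    string Ping();
}

public class PingService : IPingService
{
    public string Ping() { return "pong"; }
}

class HostSetupSketch
{
    static void Main()
    {
        var host = new ServiceHost(typeof(PingService),
            new Uri("net.tcp://localhost:8443/ping"));

        // Transport security over net.tcp, with the client required to present a certificate.
        var binding = new NetTcpBinding(SecurityMode.Transport);
        binding.Security.Transport.ClientCredentialType = TcpClientCredentialType.Certificate;
        host.AddServiceEndpoint(typeof(IPingService), binding, "");

        // Service certificate: InternalWebService (with its private key) from
        // LocalMachine\Personal ("My"), found by subject name.
        host.Credentials.ServiceCertificate.SetCertificate(
            StoreLocation.LocalMachine,
            StoreName.My,
            X509FindType.FindBySubjectName,
            "InternalWebService");

        // Accept client certificates that chain to a trusted root CA
        // (so InternalCA must be in the server's Trusted Root store).
        host.Credentials.ClientCertificate.Authentication.CertificateValidationMode =
            X509CertificateValidationMode.ChainTrust;

        host.Open();
        Console.WriteLine("Service running. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}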
