I need to fetch the client certificate. Running locally, with no server involved, I can do it this way:
public static X509Certificate2 EscolherCertificado(string serial)
{
    var store = new X509Store("MY", StoreLocation.CurrentUser);
    store.Open(OpenFlags.ReadOnly | OpenFlags.OpenExistingOnly);
    X509Certificate2Collection collection = store.Certificates;
    X509Certificate2Collection fcollection = collection.Find(X509FindType.FindBySerialNumber, serial, false);
    if (fcollection.Count == 1)
    {
        return fcollection[0];
    }
    else { cod = "00000"; msgm = "not found"; return null; }
}
But when I publish to the server it does not work. Is there any way I can do this?
I cannot get the client certificate; it returns an error, because there are no registered certificates on the server.
EDIT
I have already been told that it is possible, I just do not know how to do it; the ways I tried do not work.
EDIT
Following this link, I did as instructed, but it does not work; it does not always find the certificate. What can I do to correct this problem?
public static X509Certificate2 EscolherCertificado(string serial)
{
    X509Store userCaStore = new X509Store(StoreName.My, StoreLocation.CurrentUser);
    try
    {
        userCaStore.Open(OpenFlags.ReadOnly);
        X509Certificate2Collection certificatesInStore = userCaStore.Certificates;
        X509Certificate2Collection findResult = certificatesInStore.Find(X509FindType.FindBySerialNumber, serial, true);
        X509Certificate2 clientCertificate = null;
        if (findResult.Count == 1)
        {
            clientCertificate = findResult[0];
        }
        else
        {
            throw new Exception("Unable to locate the correct client certificate.");
        }
        cod = "0000"; msgm = clientCertificate.ToString(); return clientCertificate;
    }
    catch
    {
        throw;
    }
    finally
    {
        userCaStore.Close();
    }
}
As far as I know, you cannot reach the client certificate store from the server.
To do that you have to code a proprietary plugin for each browser platform and give the right access permissions to the client.
That's a very painful task.
I wish you good luck with it.
In my case we ended up developing an EXE which has full access to the client hardware and features of the platform.
Here's the code:
// Requires: using System.Security.Cryptography.X509Certificates;
private void ElegirCert()
{
    var store = new X509Store("MY", StoreLocation.CurrentUser);
    store.Open(OpenFlags.ReadOnly);
    X509Certificate2Collection fcollection = store.Certificates;
    // Optionally filter to currently valid certificates:
    // fcollection = fcollection.Find(X509FindType.FindByTimeValid, DateTime.Now, false);
    try
    {
        // Show the standard Windows certificate-picker dialog
        Cert = X509Certificate2UI.SelectFromCollection(
            fcollection,
            "Elegir",
            "Seleccione el certificado que desea utilizar",
            X509SelectionFlag.SingleSelection)[0];
    }
    catch (Exception)
    {
        // User cancelled the dialog or no certificate was selected
    }
    store.Close();
}
I still think that it is not possible to get the list of client certificates using browser capabilities alone.
There is a way, and it involves creating an extension which communicates with a native executable file that does the "hard work": getting the full list of certificates and exposing it to the user.
After that, the user chooses one, the EXE asks for the cert store password (if there is one), and then the EXE digitally signs the hash and so on...
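A minimal sketch of the native-host side of that channel, assuming the Chrome/Edge native-messaging protocol (a 4-byte little-endian length prefix followed by UTF-8 JSON over stdin/stdout; all names here are hypothetical):
using System;
using System.IO;
using System.Text;

// Hypothetical native-messaging host: reads one JSON message from the extension.
static string ReadMessageFromExtension()
{
    using (Stream stdin = Console.OpenStandardInput())
    {
        byte[] lengthBytes = new byte[4];
        stdin.Read(lengthBytes, 0, 4);                  // 4-byte length prefix
        int length = BitConverter.ToInt32(lengthBytes, 0);
        byte[] buffer = new byte[length];
        stdin.Read(buffer, 0, length);
        return Encoding.UTF8.GetString(buffer);         // e.g. {"action":"listCerts"}
    }
}
The EXE would answer the same way on stdout, for example with the serialized certificate list, before prompting the user to pick one.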
Related
I am trying to implement certificate authentication in a .NET Core API (server/target), and this API will be invoked from another API (client). Here is the piece of code from the client API which makes the request to the server/target API, but I'm facing an error on the server/target API. I'm running these two services locally, and both certificates have already been installed.
Client side controller logic
[HttpGet]
public async Task<List<WeatherForecast>> Get()
{
    List<WeatherForecast> weatherForecastList = new List<WeatherForecast>();
    X509Certificate2 clientCert = Authentication.GetClientCertificate();
    if (clientCert == null)
    {
        HttpActionContext actionContext = null;
        actionContext.Response = new HttpResponseMessage(System.Net.HttpStatusCode.Forbidden)
        {
            ReasonPhrase = "Client Certificate Required"
        };
    }
    HttpClientHandler requestHandler = new HttpClientHandler();
    requestHandler.ClientCertificates.Add(clientCert);
    requestHandler.ServerCertificateCustomValidationCallback += (sender, cert, chain, sslPolicyErrors) => true;
    HttpClient client = new HttpClient(requestHandler)
    {
        BaseAddress = new Uri("https://localhost:11111/ServerAPI")
    };
    client.DefaultRequestHeaders
        .Accept
        .Add(new MediaTypeWithQualityHeaderValue("application/xml")); // ACCEPT header
    using (var httpClient = new HttpClient())
    {
        //httpClient.DefaultRequestHeaders.Accept.Clear();
        var request = new HttpRequestMessage()
        {
            RequestUri = new Uri("https://localhost:44386/ServerAPI"),
            Method = HttpMethod.Get,
        };
        request.Headers.Add("X-ARR-ClientCert", clientCert.GetRawCertDataString());
        httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json")); // ACCEPT header
        //using (var response = await httpClient.GetAsync("https://localhost:11111/ServerAPI"))
        using (var response = await httpClient.SendAsync(request))
        {
            if (response.StatusCode == System.Net.HttpStatusCode.OK)
            {
                string apiResposne = await response.Content.ReadAsStringAsync();
                weatherForecastList = JsonConvert.DeserializeObject<List<WeatherForecast>>(apiResposne);
            }
        }
    }
    return weatherForecastList;
}
authentication class
public static X509Certificate2 GetClientCertificate()
{
    X509Store userCaStore = new X509Store(StoreName.TrustedPeople, StoreLocation.CurrentUser);
    try
    {
        string str_API_Cert_Thumbprint = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
        userCaStore.Open(OpenFlags.ReadOnly);
        X509Certificate2Collection certificatesInStore = userCaStore.Certificates;
        X509Certificate2Collection findResult = certificatesInStore.Find(X509FindType.FindByThumbprint, str_API_Cert_Thumbprint, false);
        X509Certificate2 clientCertificate = null;
        if (findResult.Count == 1)
        {
            clientCertificate = findResult[0];
            if (System.DateTime.Today >= System.Convert.ToDateTime(clientCertificate.GetExpirationDateString()))
            {
                throw new Exception("Certificate has already been expired.");
            }
            else if (System.Convert.ToDateTime(clientCertificate.GetExpirationDateString()).AddDays(-30) <= System.DateTime.Today)
            {
                throw new Exception("Certificate is about to expire in 30 days.");
            }
        }
        else
        {
            throw new Exception("Unable to locate the correct client certificate.");
        }
        return clientCertificate;
    }
    catch (Exception ex)
    {
        throw;
    }
    finally
    {
        userCaStore.Close();
    }
}
Server/target api code
[HttpGet]
public IEnumerable<WeatherForecast> Getcertdata()
{
    IHeaderDictionary headers = base.Request.Headers;
    X509Certificate2 clientCertificate = null;
    string certHeaderString = headers["X-ARR-ClientCert"];
    if (!string.IsNullOrEmpty(certHeaderString))
    {
        //byte[] bytes = Encoding.ASCII.GetBytes(certHeaderString);
        //byte[] bytes = Convert.FromBase64String(certHeaderString);
        //clientCertificate = new X509Certificate2(bytes);
        clientCertificate = new X509Certificate2(WebUtility.UrlDecode(certHeaderString));
        var serverCertificate = new X509Certificate2(Path.Combine("abc.pfx"), "pwd");
        if (clientCertificate.Thumbprint == serverCertificate.Thumbprint)
        {
            //Valida Cert
        }
    }
    var rng = new Random();
    return Enumerable.Range(1, 5).Select(index => new WeatherForecast
    {
        Date = DateTime.Now.AddDays(index),
        TemperatureC = rng.Next(-20, 55),
        Summary = Summaries[rng.Next(Summaries.Length)]
    }).ToArray();
    //return new List<WeatherForecast>();
}
You have many more problems here; the code is significantly flawed and insecure in various ways. Let's go through each issue:
HttpClient in a using clause in client-side controller logic
Normally you would wrap anything that implements IDisposable in a using statement, but that is not really appropriate for HttpClient. Connections are not closed immediately when it is disposed, and with every request to the client controller action a new connection is established to the remote endpoint, while previous connections sit in the TIME_WAIT state. Under constant load, your HttpClient usage will exhaust the TCP port pool (which is limited), and any new attempt to create a connection will throw an exception. Here are more details on this problem: You're using HttpClient wrong and it is destabilizing your software
Microsoft's recommendation is to re-use existing connections. One way to do this is to use IHttpClientFactory to implement resilient HTTP requests. The Microsoft article talks a bit about this problem:
Though this class implements IDisposable, declaring and instantiating it within a using statement is not preferred because when the HttpClient object gets disposed of, the underlying socket is not immediately released, which can lead to a socket exhaustion problem.
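For illustration, a minimal sketch of the factory approach, assuming ASP.NET Core dependency injection (the client name "certClient" is hypothetical, Authentication.GetClientCertificate() is the asker's own helper, and _clientFactory is assumed to be an injected IHttpClientFactory):
// In Startup.ConfigureServices: register a named client once; the factory
// pools and recycles the underlying handlers instead of exhausting sockets.
services.AddHttpClient("certClient", c =>
{
    c.BaseAddress = new Uri("https://localhost:44386/");
    c.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
})
.ConfigurePrimaryHttpMessageHandler(() =>
{
    var handler = new HttpClientHandler();
    handler.ClientCertificates.Add(Authentication.GetClientCertificate());
    return handler;
});

// In the controller: create clients from the injected factory per request.
HttpClient client = _clientFactory.CreateClient("certClient");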
BTW, you have created a client variable, but do not use it in any way.
Ignore certificate validation problems
The line:
requestHandler.ServerCertificateCustomValidationCallback += (sender, cert, chain, sslPolicyErrors) => true;
makes you vulnerable to a MITM attack.
You are doing client certificate authentication wrong
The line:
request.Headers.Add("X-ARR-ClientCert", clientCert.GetRawCertDataString());
is not the proper way to do client cert authentication. What you are literally doing is passing the certificate's public part to the server. That's all. You do not prove private key possession, which is required to authenticate you. The proper way to do so is:
requestHandler.ClientCertificates.Add(clientCert);
This will force the client and server to perform proper client certificate authentication and check that you possess the private key for the certificate you pass (this is done automatically in the TLS handshake). If you have ASP.NET on the server side, then you read the certificate this way (in a controller action):
X509Certificate2 clientCert = Request.HttpContext.Connection.ClientCertificate;
if (clientCert == null) {
return Unauthorized();
}
// perform client cert validation according server-side rules.
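Note that ClientCertificate will be null unless the server actually requests a certificate during the TLS handshake. A sketch of the relevant setting, assuming ASP.NET Core on Kestrel (webBuilder is the host's IWebHostBuilder):
// In the host builder: require the client to present a certificate during TLS.
webBuilder.ConfigureKestrel(options =>
{
    options.ConfigureHttpsDefaults(https =>
        https.ClientCertificateMode = ClientCertificateMode.RequireCertificate);
});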
Non-standard cert store
In the authentication class you open the StoreName.TrustedPeople store, while normally it should be StoreName.My. TrustedPeople isn't designed to store certs with private keys. It isn't a functional problem, but it is bad practice.
Unnecessary try/catch clause in authentication class
If you purposely throw exceptions in a method, do not use try/catch. In your case you simply rethrow the exception, so you are doing double work. And this:
throw new Exception("Certificate is about to expire in 30 days.");
is beyond me. Throwing an exception on a technically valid certificate? Really?
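A sketch of a friendlier approach, using the strongly typed NotAfter property instead of parsing GetExpirationDateString (the logger variable is assumed to be whatever logging abstraction you already use):
// Warn about impending expiry, but keep accepting the still-valid certificate.
if (clientCertificate.NotAfter <= DateTime.Today.AddDays(30))
{
    logger.LogWarning("Client certificate expires on {Expiry}", clientCertificate.NotAfter);
}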
Server-side code
As said, all this:
IHeaderDictionary headers = base.Request.Headers;
X509Certificate2 clientCertificate = null;
string certHeaderString = headers["X-ARR-ClientCert"];
if (!string.IsNullOrEmpty(certHeaderString))
{
    //byte[] bytes = Encoding.ASCII.GetBytes(certHeaderString);
    //byte[] bytes = Convert.FromBase64String(certHeaderString);
    //clientCertificate = new X509Certificate2(bytes);
    clientCertificate = new X509Certificate2(WebUtility.UrlDecode(certHeaderString));
    var serverCertificate = new X509Certificate2(Path.Combine("abc.pfx"), "pwd");
    if (clientCertificate.Thumbprint == serverCertificate.Thumbprint)
    {
        //Valida Cert
    }
}
must be replaced with:
X509Certificate2 clientCert = Request.HttpContext.Connection.ClientCertificate;
if (clientCert == null) {
return Unauthorized();
}
// perform client cert validation according server-side rules.
BTW:
var serverCertificate = new X509Certificate2(Path.Combine("abc.pfx"), "pwd");
if (clientCertificate.Thumbprint == serverCertificate.Thumbprint)
{
    //Valida Cert
}
This is another disaster in your code. You are loading the server certificate from a PFX just to compare thumbprints? So you suppose the client will have a copy of the server certificate? Client and server certificates must not be the same. The next problem is that you are generating a lot of copies of the server certificate's private key files: the more private key files you generate, the slower the process is, and you just produce a lot of garbage. You can find more details in my blog post: Handling X509KeyStorageFlags in applications
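For illustration, one way to avoid persisting a new private key file on every load, assuming .NET Core 2.0+ on Windows (the EphemeralKeySet flag keeps the key in memory only):
// Load the PFX without writing its private key to disk.
var serverCertificate = new X509Certificate2(
    "abc.pfx",
    "pwd",
    X509KeyStorageFlags.EphemeralKeySet);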
I'm trying to access the Azure Elastic Scale Split/Merge tool from an ASP.NET application. I can open the page in my browser when I use the certificate that I uploaded to Azure. But when I try to connect to the page from ASP.NET I keep getting a 500 Internal Server Error, even though I used the certificate in my request.
Is there something wrong with the code below? Have I forgotten something?
var handler = new WebRequestHandler();
handler.ServerCertificateValidationCallback = delegate { return true; };
handler.ClientCertificateOptions = ClientCertificateOption.Manual;
handler.ClientCertificates.Add(Cert); // Cert is the X509Certificate2 I use
using (var client = new HttpClient(handler))
{
    try
    {
        var response = await client.GetAsync(Endpoint); // Endpoint = https://foobar.cloudapp.net/
        if (response.IsSuccessStatusCode)
        {
            var a = response.Content.ReadAsStringAsync();
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        throw;
    }
}
Found where I went wrong. When creating the certificate I was using the .cer file, but it works now with the .pfx file. (The .cer file contains only the public part of the certificate; the .pfx also contains the private key, which is needed for client authentication.)
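For reference, a minimal sketch of the working setup (file name and password are placeholders):
// A PKCS#12 (.pfx) file carries the private key needed for the TLS client handshake;
// a .cer file holds only the public certificate.
var cert = new X509Certificate2("client.pfx", "password");
handler.ClientCertificates.Add(cert);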
I'm porting a website to dnx core/aspnet5/mvc6. I need to store passwords to 3rd party sites in the database (it's essentially an aggregator).
In earlier versions of mvc, I did this using classes like RijndaelManaged. But those don't appear to exist in dnx core. In fact, I haven't been able to find much documentation on any general purpose encryption/decryption stuff in dnx core.
What's the recommended approach for encrypting/decrypting single field values in an mvc6 site? I don't want to encrypt the entire sql server database.
Or should I be looking at a different approach for storing the credentials necessary to access a password-protected 3rd party site?
See the DataProtection API documentation
Their guidance on using it for persistent data protection is a little hedgy, but they say there is no technical reason you can't do it. Basically, to store protected data persistently, you need to be willing to allow unprotecting it with expired keys, since the keys could expire after you protect the data.
To me it seems reasonable to use it and I am using it in my own project.
Since the IPersistedDataProtector only provides methods that work with byte arrays, I made a couple of extension methods to convert the bytes back and forth from string.
public static class DataProtectionExtensions
{
    public static string PersistentUnprotect(
        this IPersistedDataProtector dp,
        string protectedData,
        out bool requiresMigration,
        out bool wasRevoked)
    {
        bool ignoreRevocation = true;
        byte[] protectedBytes = Convert.FromBase64String(protectedData);
        byte[] unprotectedBytes = dp.DangerousUnprotect(protectedBytes, ignoreRevocation, out requiresMigration, out wasRevoked);
        return Encoding.UTF8.GetString(unprotectedBytes);
    }

    public static string PersistentProtect(
        this IPersistedDataProtector dp,
        string clearText)
    {
        byte[] clearBytes = Encoding.UTF8.GetBytes(clearText);
        byte[] protectedBytes = dp.Protect(clearBytes);
        string result = Convert.ToBase64String(protectedBytes);
        return result;
    }
}
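Hypothetical usage, assuming an injected IDataProtectionProvider named provider and a purpose string of your choosing:
bool requiresMigration, wasRevoked;
var protector = provider.CreateProtector("MyApp.FieldProtection") as IPersistedDataProtector;
string cipher = protector.PersistentProtect("secret value");
string clear = protector.PersistentUnprotect(cipher, out requiresMigration, out wasRevoked);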
I also created a helper class specifically for protecting certain properties on my SiteSettings object before it gets persisted to the db.
using cloudscribe.Core.Models;
using Microsoft.AspNet.DataProtection;
using Microsoft.Extensions.Logging;
using System;

namespace cloudscribe.Core.Web.Components
{
    public class SiteDataProtector
    {
        public SiteDataProtector(
            IDataProtectionProvider dataProtectionProvider,
            ILogger<SiteDataProtector> logger)
        {
            rawProtector = dataProtectionProvider.CreateProtector("cloudscribe.Core.Models.SiteSettings");
            log = logger;
        }

        private ILogger log;
        private IDataProtector rawProtector = null;
        private IPersistedDataProtector dataProtector
        {
            get { return rawProtector as IPersistedDataProtector; }
        }

        public void Protect(ISiteSettings site)
        {
            if (site == null) { throw new ArgumentNullException("you must pass in an implementation of ISiteSettings"); }
            if (site.IsDataProtected) { return; }
            if (dataProtector == null) { return; }
            if (site.FacebookAppSecret.Length > 0)
            {
                try
                {
                    site.FacebookAppSecret = dataProtector.PersistentProtect(site.FacebookAppSecret);
                }
                catch (System.Security.Cryptography.CryptographicException ex)
                {
                    log.LogError("data protection error", ex);
                }
            }
            // ....
            site.IsDataProtected = true;
        }

        public void UnProtect(ISiteSettings site)
        {
            bool requiresMigration = false;
            bool wasRevoked = false;
            if (site == null) { throw new ArgumentNullException("you must pass in an implementation of ISiteSettings"); }
            if (!site.IsDataProtected) { return; }
            if (site.FacebookAppSecret.Length > 0)
            {
                try
                {
                    site.FacebookAppSecret = dataProtector.PersistentUnprotect(site.FacebookAppSecret, out requiresMigration, out wasRevoked);
                }
                catch (System.Security.Cryptography.CryptographicException ex)
                {
                    log.LogError("data protection error", ex);
                }
                catch (FormatException ex)
                {
                    log.LogError("data protection error", ex);
                }
            }
            site.IsDataProtected = false;
            if (requiresMigration || wasRevoked)
            {
                log.LogWarning("DataProtection key wasRevoked or requires migration, save site settings for " + site.SiteName + " to protect with a new key");
            }
        }
    }
}
If the app will need to migrate to other machines after data has been protected, then you also want to take control of the key location. As I understand it, the default puts the keys on the OS keyring of the machine, so this is a lot like the machine key in the past, where you would override it in web.config to make it portable.
Of course, protecting the keys is on you at this point. I have code like this in the startup of my project:
//If you change the key persistence location, the system will no longer automatically encrypt keys
//at rest, since it doesn't know whether DPAPI is an appropriate encryption mechanism.
services.ConfigureDataProtection(configure =>
{
    string pathToCryptoKeys = appBasePath + Path.DirectorySeparatorChar
        + "dp_keys" + Path.DirectorySeparatorChar;
    // These keys are not encrypted at rest, since we have specified a non-default location.
    // That also makes the keys portable, so they will still work if we migrate to
    // a new machine (will they work on a different OS? I think so).
    // This is a similar server-migration issue to the old machinekey in web.config,
    // which we specified so it would not change if we migrated to a new server.
    configure.PersistKeysToFileSystem(new DirectoryInfo(pathToCryptoKeys));
});
So my keys are stored in appRoot/dp_keys in this example.
If you want to do things manually, add a reference to System.Security.Cryptography.Algorithms.
Then you can create instances of each algorithm type via the static Create method. For example:
var aes = System.Security.Cryptography.Aes.Create();
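A minimal sketch of encrypting a single string value with the resulting instance (key and IV management is up to you; this helper is purely illustrative):
using System.Security.Cryptography;
using System.Text;

// Illustrative only: encrypt one string with a caller-supplied key and IV.
static byte[] EncryptString(string plainText, byte[] key, byte[] iv)
{
    using (var aes = Aes.Create())
    {
        aes.Key = key; // e.g. 32 bytes for AES-256
        aes.IV = iv;   // 16 bytes, unique per encryption
        using (ICryptoTransform encryptor = aes.CreateEncryptor())
        {
            byte[] plainBytes = Encoding.UTF8.GetBytes(plainText);
            return encryptor.TransformFinalBlock(plainBytes, 0, plainBytes.Length);
        }
    }
}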
I'm trying to connect to a CRM 2011 Online environment. I'm able to connect via a console application, but when I try to connect via an ASP.NET application with the same code, it doesn't work; it gives me the "Authentication Failure" error ({"An unsecured or incorrectly secured fault was received from the other party. See the inner FaultException for the fault code and detail."}).
Is there something special we need to do to make it work in an ASP.NET environment? I tested several solutions I found on the internet, but they all give me the same error.
A snippet of my simplified code:
private static ClientCredentials GetDeviceCredentials()
{
    return Microsoft.Crm.Services.Utility.DeviceIdManager.LoadOrRegisterDevice();
}

protected void Button1_Click(object sender, EventArgs e)
{
    //Authenticate using credentials of the logged in user
    string UserName = "*****"; //your Windows Live ID
    string Password = "*****"; //your password
    ClientCredentials Credentials = new ClientCredentials();
    Credentials.UserName.UserName = UserName;
    Credentials.UserName.Password = Password;
    Credentials.Windows.ClientCredential = CredentialCache.DefaultNetworkCredentials;
    //This URL needs to be updated to match the server name and organization for the environment.
    Uri OrganizationUri = new Uri("https://*****.crm4.dynamics.com/XRMServices/2011/Organization.svc"); //you can copy this URL from Settings --> Developer Resources
    Uri HomeRealmUri = null;
    using (OrganizationServiceProxy serviceProxy = new OrganizationServiceProxy(OrganizationUri, HomeRealmUri, Credentials, GetDeviceCredentials()))
    {
        IOrganizationService service = (IOrganizationService)serviceProxy;
        OrganizationServiceContext orgContext = new OrganizationServiceContext(service);
        var theAccounts = orgContext.CreateQuery<Account>().Take(1).ToList();
        Response.Write(theAccounts.First().Name);
    }
}
I tried several things, like deleting the contents of the "LiveDeviceID" folder and re-running the device registration tool, but it is weird that it works in the console application and not in my ASP.NET solution...
PS: I am able to generate the context file via crmsvcutil.exe /url:https://org.crm4.dynamics.com/XRMServices/2011/Organization.svc /o:crm.cs /u:username /p:password /di:deviceUserName /dp:devicPWD
Is there any particular reason you have:
Credentials.Windows.ClientCredential = CredentialCache.DefaultNetworkCredentials;
You shouldn't need that line for Windows Live authentication.
Even with that, the code seems valid, so it is probably something to do with the device registration. Rather than calling it directly like you have:
using (OrganizationServiceProxy serviceProxy = new OrganizationServiceProxy(OrganizationUri, HomeRealmUri, Credentials, GetDeviceCredentials()))
{
try something like the following, because you only need to register once:
ClientCredentials deviceCredentials;
if ((CRMSettings.Default.DeviceID == String.Empty) || (CRMSettings.Default.DevicePassword == String.Empty))
{
    deviceCredentials = Microsoft.Crm.Services.Utility.DeviceIdManager.RegisterDevice();
}
else
{
    deviceCredentials = new ClientCredentials();
    deviceCredentials.UserName.UserName = CRMSettings.Default.DeviceID;
    deviceCredentials.UserName.Password = CRMSettings.Default.DevicePassword;
}

using (OrganizationServiceProxy serviceProxy = new OrganizationServiceProxy(OrganizationUri, HomeRealmUri, Credentials, deviceCredentials))
{
I have had issues in the past where I got an "already registered" response from the RegisterDevice call.
I would also dump out the device ID and password so you can see whether they are being set.
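For example, a quick hypothetical diagnostic before building the proxy:
// Verify the device credentials are actually populated.
Console.WriteLine("DeviceID: " + deviceCredentials.UserName.UserName);
Console.WriteLine("DevicePassword: " + deviceCredentials.UserName.Password);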
Basically I'm running into the same problem as this post: Accessing mapped drives when impersonating in ASP.NET
I'm working on a legacy website and I need to allow the admins to change the site's logo, banners, etc., from an image file on their desktops to a mapped drive on the server.
So, the website uses impersonation whenever it needs to save to the drive, and it's working just fine; however, I can't manage to make it work in their test environment or in my test environment.
Any ideas? I've double-checked the user and password (the code doesn't specify a domain), and that's not the issue.
Here's an excerpt from the code that handles impersonation:
public bool ImpersonateUser(String user, String password, String domain)
{
    WindowsIdentity tempWindowsIdentity;
    IntPtr token = IntPtr.Zero;
    IntPtr tokenDuplicate = IntPtr.Zero;
    if (RevertToSelf())
    {
        if (LogonUserA(user, domain, password, LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, ref token) != 0)
        {
            if (DuplicateToken(token, 2, ref tokenDuplicate) != 0)
            {
                tempWindowsIdentity = new WindowsIdentity(tokenDuplicate);
                impersonationContext = tempWindowsIdentity.Impersonate();
                if (impersonationContext != null)
                {
                    CloseHandle(token);
                    CloseHandle(tokenDuplicate);
                    return true;
                }
            }
        }
    }
    //... rest of the code
And a -sanitized- test:
if (impUtility.ImpersonateUser("user", "password", string.Empty))
{
    fu.SaveAs(@"C:\Images\" + imgName);
}
I couldn't get that to work either.
Then I realized that even if I could implement it, there is an easier way.
What I did was share the folder on the target machine, and give only read/write permissions to the users that would be using my application.
//Impersonate user to save file on server
WindowsIdentity wi = (WindowsIdentity)User.Identity;
WindowsImpersonationContext wic = null;
try
{
    wic = wi.Impersonate();
    if (wi.IsAuthenticated)
        asyncFileUpload.SaveAs(location);
}
catch (Exception ex)
{
    //Log error or notify here
    success = false;
}
finally
{
    if (wic != null)
        wic.Undo();
}
I created an AD group for the users and gave read/write permissions for those users on the hidden shared drive. This makes it easier to maintain, since I don't have to create mapped drives for each user.