I've read other similar answers, but they either use IIS, talk about self-signed certificates, or don't serve the purpose at all.
I'm trying to create a simple web API that will be hosted on a Windows machine with SQL Express, using .NET 5.
I'm able to create a self-signed certificate and use it during development, but this will be hosted on a client's machine and they probably have an SSL certificate. In the past, with web applications that ran only on localhost, I did something like this:
public static IHostBuilder CreateHostBuilder( string[] args ) =>
    Host.CreateDefaultBuilder( args )
        .ConfigureWebHostDefaults( builder =>
        {
            builder.UseStartup<Startup>();
            builder.UseKestrel( options =>
            {
                if ( ServerConfiguration.ShouldUseHttps() )
                {
                    options.Listen( IPAddress.Any, 6050, listenOptions =>
                    {
                        listenOptions.UseHttps( Path.Combine( "Certificates", "cert.pfx" ), CertificatePassword );
                    } );
                }
                else
                {
                    options.Listen( IPAddress.Any, 6050 );
                }
            } );
        } );
Where cert.pfx is my self-signed certificate. I would ship that certificate with the software and tell the client to install it, so they could use HTTPS and the browser would trust the certificate. Probably enough for a localhost application, but not for an exposed API.
So let's say the client has bought an SSL certificate and I want my .NET application to use that certificate, which will be installed on the same machine as my application. How can I accomplish that?
Right now, I've just deployed my application on another computer, without any certificates or anything else, and of course I get errors like "The SSL certificate can't be trusted" (in Postman, for example).
If the client doesn't buy an SSL certificate, can we use a self-signed certificate?
Thank you very much.
Don't configure your endpoints in code. Instead, configure them in your appsettings.json file, as described in the Kestrel documentation. You can configure just one endpoint, or multiple.
Here's an example configuration with an HTTP and an HTTPS endpoint, where the certificate is loaded from a .pfx file with a password:
{
  "Kestrel": {
    "Endpoints": {
      "Http": {
        "Url": "http://localhost:5000"
      },
      "HttpsInlineCertFile": {
        "Url": "https://localhost:5001",
        "Certificate": {
          "Path": "<path to .pfx file>",
          "Password": "<certificate password>"
        }
      }
    }
  }
}
The documentation shows different configurations for the certificate, such as a .pem and key file (like you get from Let's Encrypt, for example) or the Windows certificate store.
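For instance, a minimal sketch of an endpoint that pulls the certificate from the Windows certificate store might look like this (the endpoint name and subject here are placeholders; check the Kestrel documentation for the exact options your version supports):
{
  "Kestrel": {
    "Endpoints": {
      "HttpsFromStore": {
        "Url": "https://localhost:5001",
        "Certificate": {
          "Subject": "<certificate subject, e.g. example.com>",
          "Store": "My",
          "Location": "LocalMachine",
          "AllowInvalid": false
        }
      }
    }
  }
}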
This way, if the client gets their own cert, it's just a matter of updating the appsettings.json.
Related
I deployed the ASP.NET Angular SPA template RC2 app with the IdentityServer option on my local IIS server, and I get the exception below. IIS can't access my local certificate info.
Application: w3wp.exe
CoreCLR Version: 6.0.21.52210
.NET Version: 6.0.0
Description: The process was terminated due to an unhandled exception.
Exception Info: System.InvalidOperationException: Couldn't find a valid certificate with subject 'CN=my-subdomain.azurewebsites.net' on the 'CurrentUser\My'
at Microsoft.AspNetCore.ApiAuthorization.IdentityServer.SigningKeysLoader.LoadFromStoreCert(String subject, String storeName, StoreLocation storeLocation, DateTimeOffset currentTime)
at Microsoft.AspNetCore.ApiAuthorization.IdentityServer.ConfigureSigningCredentials.LoadKey()
at Microsoft.AspNetCore.ApiAuthorization.IdentityServer.ConfigureSigningCredentials.Configure(ApiAuthorizationOptions options)
at Microsoft.Extensions.Options.OptionsFactory`1.Create(String name)
at Microsoft.Extensions.Options.UnnamedOptionsManager`1.get_Value()
at Microsoft.Extensions.DependencyInjection.IdentityServerBuilderConfigurationExtensions.<>c.<AddClients>b__8_1(IServiceProvider sp)
at Microsoft.Extensions.DependencyInjection.ServiceLookup.CallSiteVisitor`2.VisitCallSiteMain(ServiceCallSite callSite, TArgument argument)
I created a certificate via
New-SelfSignedCertificate -DnsName "my-subdomain.azurewebsites.net" -CertStoreLocation "cert:\CurrentUser\My"
I set my appsettings.json as
"IdentityServer": {
"Key": {
"Type": "Store",
"StoreName": "My",
"StoreLocation": "CurrentUser",
"Name": "CN=my-subdomain.azurewebsites.net"
},
"Clients": {
"MarketPlace": {
"Profile": "IdentityServerSPA"
}
}
When I publish this site with the same configuration using folder publish, it works properly. How can I fix the IIS error so it can access the certificate?
I had a similar issue with a .NET Core 5.0 web API implemented using Clean Architecture (CQRS pattern); I needed two certificates to run my application:
A CA-issued certificate for external communication, outside the scope of the application level.
A self-signed certificate for communication within the application level, for example with other services/class library services or with the database.
(For internal communication the app will look for a certificate created at the domain level or server level.)
So in the production appsettings JSON we need to point to the self-signed certificate, while when publishing the application in IIS we need to select the CA-issued certificate from the SSL Certificate dropdown in the "Edit Binding" screen.
Point 1: the CA-issued certificate shall be imported into IIS.
Point 2: the self-signed certificate shall be present in MMC (Personal, Root, or Web Hosting, under Local Machine or Current User), and the same shall be referenced in the production appsettings JSON, for example as sketched below.
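A minimal sketch of what that production appsettings entry could look like, reusing the store-based Key configuration shown in the question above (the subject name and store location are placeholders, not values from the original answer):
"IdentityServer": {
  "Key": {
    "Type": "Store",
    "StoreName": "My",
    "StoreLocation": "LocalMachine",
    "Name": "CN=<your-self-signed-subject>"
  }
}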
Clean Architecture:
https://github.com/jasontaylordev/CleanArchitecture
For some reason our infrastructure blocks mqtt.googleapis.com. That's why an nginx proxy was deployed with the following configuration:
stream {
  upstream google_mqtt {
    server mqtt.googleapis.com:8883;
  }
  server {
    listen 8883;
    proxy_pass google_mqtt;
  }
}
The proxy also has an external IP with the domain name fake.mqtt.com.
Using the example here, I'm testing connectivity.
If the script runs against mqtt.googleapis.com:8883, everything works fine.
But if the domain is switched to fake.mqtt.com, I get an error:
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate is not valid for 'fake.mqtt.com'
The client implementation uses paho.mqtt.client.
Authentication to the MQTT broker is done with a JWT.
import datetime

import jwt  # PyJWT

def create_jwt(project_id, private_key_file, algorithm):
    token = {
        # The time that the token was issued at
        "iat": datetime.datetime.utcnow(),
        # The time the token expires.
        "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=20),
        # The audience field should always be set to the GCP project id.
        "aud": project_id,
    }
    # Read the private key file.
    with open(private_key_file, "r") as f:
        private_key = f.read()
    print(
        "Creating JWT using {} from private key file {}".format(
            algorithm, private_key_file
        )
    )
    return jwt.encode(token, private_key, algorithm=algorithm)
Set JWT
client.username_pw_set(
    username='unused',
    password=create_jwt(project_id, private_key_file, algorithm))
TLS configuration:
client.tls_set(ca_certs='roots.pem', tls_version=ssl.PROTOCOL_TLSv1_2,)
Could you advise what to configure on the nginx/paho-client side, and whether this is a workable solution at all?
Or maybe third-party brokers can connect to mqtt.googleapis.com? (From the information I've read here and in other resources: no.)
You cannot just arbitrarily change the domain name if you are only stream proxying; it needs to match the name presented in the certificate by the remote broker, or, as you have seen, it will not validate.
You can force the client not to validate the server name by calling client.tls_insecure_set(True), but this is a VERY bad idea and should only be used for testing, never in production.
In my BlazorWebAssembly + ASP.NET Core Identity test site (.NET 5.0 RC1), I'm getting the following error when trying to log in.
There was an error trying to log you in: 'Network Error'
I have already set appsettings OIDC to be the following:
{
  "SiteName": "MyProject",
  "oidc": {
    "Authority": "https://167.172.118.170/",
    "ClientId": "MyProject",
    "DefaultScopes": [
      "openid",
      "profile"
    ],
    "PostLogoutRedirectUri": "/",
    "RedirectUri": "https://167.172.118.170/authentication/login-callback",
    "ResponseType": "code"
  }
}
Why is it not able to connect?
The test site is at http://167.172.118.170/ and the code can be found at https://github.com/jonasarcangel/BlazorLoginNetworkErrorIssue
It is clear by now that Blazor uses the internal URL http://localhost:5008 as the authority instead of the external URL http://167.172.118.170/.
When the client attempts to connect to http://localhost:5008/.well-known/openid-configuration, an error occurs: connection refused...
As a matter of fact the client should use this url: http://167.172.118.170/.well-known/openid-configuration, but it does not as the value of the authority is determined by Blazor.
If you type the url http://167.172.118.170/.well-known/openid-configuration in the browser's address bar, you'll see all the configuration information about the Identity Provider. Indeed, http://167.172.118.170/ is the authority. But as you've seen setting the Authority to this url in the appsettings.json file was simply ignored, and the internal url was used instead.
How to solve this? We should tell Blazor not to use the internal URL but the external one...
Attempts suggested:
In the web server project's Startup class's ConfigureServices method, change this code:
services.AddIdentityServer()
    .AddApiAuthorization<ApplicationUser, ApplicationIdentityDbContext>();
to:
services.AddIdentityServer(options =>
{
    options.IssuerUri = "https://167.172.118.170/";
})
    .AddApiAuthorization<ApplicationUser, ApplicationIdentityDbContext>();
Use the ForwardedHeaders middleware. See this sample for how to do it; a minimal sketch is shown below.
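As an illustration only (not the linked sample), wiring up the forwarded headers middleware early in Startup.Configure might look roughly like this; which headers and proxies to trust depends on your hosting setup:
// Requires the Microsoft.AspNetCore.HttpOverrides namespace.
public void Configure(IApplicationBuilder app)
{
    // Respect the scheme/host forwarded by the reverse proxy so the
    // authority reflects the external URL rather than localhost.
    app.UseForwardedHeaders(new ForwardedHeadersOptions
    {
        ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
    });
    // Depending on the proxy, KnownProxies/KnownNetworks may also need to be configured.

    // ... rest of the pipeline (UseIdentityServer, UseAuthentication, etc.)
}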
Stick to the above... The issue is here, and not somewhere else.
Good luck...
I just tested, and the URL http://167.172.118.170/_configuration/BlazorWorld.Web.Client returns
{
  "authority": "http://localhost:5008",
  "client_id": "BlazorWorld.Web.Client",
  "redirect_uri": "http://localhost:5008/authentication/login-callback",
  "post_logout_redirect_uri": "http://localhost:5008/authentication/logout-callback",
  "response_type": "code",
  "scope": "BlazorWorld.Web.ServerAPI openid profile"
}
Then the app tries to connect to http://localhost:5008/.well-known/openid-configuration.
So the deployed appsettings is probably not the right one.
I am hosting on Azure and have it configured to only allow https. The backend is running ASP .NET Core in a Linux container. The webserver (Kestrel) is running without https enabled. I've configured Azure TLS/SSL settings to force https, so when users connect from the public internet, they have to use https. I have a cert that is signed by a cert authority and it's configured in the Azure App Service -> TLS/SSL -> Bindings settings.
However, in my local development environment I've been running webpack using HTTP. So when I test, I connect to localhost:8080 and this is redirected to localhost:8085 by webpack; localhost:8085 is the port Kestrel is listening on. I've decided I want to develop locally using HTTPS so that my environment mimics the production environment closely. To do this I've started the webpack-dev-server with the --https command line option and amended my redirects in my webpack.config.js.
For example:
'/api/*': {
    target: 'https://localhost:' + (process.env.SERVER_PROXY_PORT || "8085"),
    changeOrigin: true,
    secure: false
},
This redirects https requests to port 8085.
I've created a self-signed cert for use by Kestrel when developing locally. I modified my code to use this certificate as shown below:
let configure_host (settings_file : string) (builder : IWebHostBuilder) =
    //turns out if you pass an anonymous function to a function that expects an Action<...> or
    //Func<...> the type inference will work out the inner types....so you don't need to specify them.
    builder.ConfigureAppConfiguration((fun ctx config_builder ->
        config_builder
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddEnvironmentVariables()
            .AddJsonFile(settings_file, false, true)
            .Build() |> ignore))
        .ConfigureKestrel(fun ctx opt ->
            eprintfn "JWTIssuer = %A" ctx.Configuration.["JWTIssuer"]
            eprintfn "CertificateFilename = %A" ctx.Configuration.["CertificateFilename"]
            let certificate_file = (ctx.Configuration.["CertificateFilename"])
            let certificate_password = (ctx.Configuration.["CertificatePassword"])
            let certificate = new X509Certificate2(certificate_file, certificate_password)
            opt.AddServerHeader <- false
            opt.Listen(IPAddress.Loopback, 8085, (fun opt -> opt.UseHttps(certificate) |> ignore)))
        .UseUrls("https://localhost:8085") |> ignore
    builder
This all works, and I can connect to webpack locally and it redirects the request to the webserver using https. The browser complains that the cert is insecure because it's self-signed but that was expected.
My question is how this should be set up in the production environment. I don't want to be running the container on Azure with the certificates I created locally embedded in the image. In my production environment, should I be configuring Kestrel, as I have done with the localhost code, to use the cert loaded into Azure (as mentioned in the first paragraph)? Or is simply binding it to the domain using the portal and forcing HTTPS via the web UI enough?
On Azure, if you have the PFX certificate, you can choose to upload the certificate:
see this image
However, this certificate needs to come from a trusted certificate authority.
If the URL is a subdomain, you can choose a Free App Service Managed Certificate.
After that, all you need to do is enable HTTPS Only in the portal.
If it's a naked domain and you really need the certificate to be free, you can get a certificate from sslforfree.com. sslforfree will give you the .cer file and the private key; you will need to generate a .pfx from them, for example as shown below.
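For instance (assuming the .cer and private key are PEM-encoded; the file names here are placeholders), a .pfx can typically be generated with openssl:
openssl pkcs12 -export -out certificate.pfx -inkey private.key -in certificate.cer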
I am using Visual Studio 2015 Enterprise and ASP.NET vNext Beta8 to build an endpoint that both issues and consumes JWT tokens as described in detail here. As explained in that article the endpoint uses AspNet.Security.OpenIdConnect.Server (AKA OIDC) to do the heavy lifting.
While standing this prototype up in our internal development environment we have encountered a problem using it with a load balancer. In particular, we think it has to do with the "Authority" setting on app.UseJwtBearerAuthentication and our peculiar mix of http/https. With our load balanced environment, any attempt to call a REST method using the token yields this exception:
WebException: The remote name could not be resolved: 'devapi.contoso.com.well-known'
HttpRequestException: An error occurred while sending the request.
IOException: IDX10804: Unable to retrieve document from: 'https://devapi.contoso.com.well-known/openid-configuration'.
Consider the following steps to reproduce (this is for prototyping and should not be considered production worthy):
We created a beta8 prototype using OIDC as described here.
We deployed the project to 2 identically configured IIS 8.5 servers running on Server 2012 R2. The IIS servers host a beta8 site called "API" with bindings to port 80 and 443 for the host name "devapi.contoso.com" (sanitized for purposes of this post) on all available IP addresses.
Both IIS servers have a host entry that point to themselves:
127.0.0.1 devapi.contoso.com
Our network admin has bound a * certificate (*.contoso.com) with our Kemp load balancer and configured the DNS entry for https://devapi.contoso.com to resolve to the load balancer.
Now this is important, the load balancer has also been configured to proxy https traffic to the IIS servers using http (not, repeat, not on https). It has been explained to me that this is standard operating procedure for our company because they only have to install the certificate in one place. We're not sure why our network admin bound 443 in IIS since it, in theory, never receives any traffic on this port.
We make a secure post via https to https://devapi.contoso.com/authorize/v1 to fetch a token, which works fine (the details of how to make this post are here):
{
  "sub": "todo",
  "iss": "https://devapi.contoso.com/",
  "aud": "https://devapi.contoso.com/",
  "exp": 1446158373,
  "nbf": 1446154773
}
We then use this token in another secure get via https to https://devapi.contoso.com/values/v1/5.
OpenIdConnect.OpenIdConnectConfigurationRetriever throws the exception:
WebException: The remote name could not be resolved: 'devapi.contoso.com.well-known'
HttpRequestException: An error occurred while sending the request.
IOException: IDX10804: Unable to retrieve document from: 'https://devapi.contoso.com.well-known/openid-configuration'.
We think this is happening because OIDC is attempting to consult the host specified in "options.Authority", which we set at startup time to https://devapi.contoso.com/. Further we speculate that because our environment has been configured to translate https traffic to non https traffic between the load balancer and IIS something is going wrong when the framework tries to resolve https://devapi.contoso.com/. We have tried many configuration changes including even pointing the authority to non-secure http://devapi.contoso.com to no avail.
Any assistance in helping us understand this problem would be greatly appreciated.
#Pinpoint was right. This exception was caused by the OIDC configuration code path that allows IdentityModel to initiate non-HTTPS calls. In particular, the code sample we were using was sensitive to a missing trailing slash in the authority URI. Here is a code fragment that uses the Uri class to combine paths in a reliable way, regardless of whether the Authority URI has a trailing slash:
public void Configure(IApplicationBuilder app, IOptions<AppSettings> appSettings)
{
    .
    .
    .
    // Add a new middleware validating access tokens issued by the OIDC server.
    app.UseJwtBearerAuthentication
    (
        options =>
        {
            options.AuthenticationScheme = JwtBearerDefaults.AuthenticationScheme;
            options.AutomaticAuthentication = false;
            options.Authority = new Uri(appSettings.Value.AuthAuthority).ToString();
            options.Audience = new Uri(appSettings.Value.AuthAuthority).ToString();
            // Allow IdentityModel to use HTTP
            options.ConfigurationManager =
                new ConfigurationManager<OpenIdConnectConfiguration>
                (
                    metadataAddress: new Uri(new Uri(options.Authority), ".well-known/openid-configuration").ToString(),
                    configRetriever: new OpenIdConnectConfigurationRetriever(),
                    docRetriever: new HttpDocumentRetriever { RequireHttps = false }
                );
        }
    );
    .
    .
    .
}
In this example we're pulling in the Authority URI from config.json via "appSettings.Value.AuthAuthority" and then sanitizing/combining it using the Uri class.
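As a quick illustration of why the Uri-based combination is insensitive to a missing trailing slash on the authority, here is a small standalone sketch (placeholder values, not part of the original configuration):
using System;

class UriCombineDemo
{
    static void Main()
    {
        // With or without a trailing slash on the authority, the metadata
        // address resolves to the same absolute URL.
        foreach (var authority in new[] { "https://devapi.contoso.com", "https://devapi.contoso.com/" })
        {
            var metadata = new Uri(new Uri(authority), ".well-known/openid-configuration");
            Console.WriteLine(metadata);
            // Prints: https://devapi.contoso.com/.well-known/openid-configuration
        }
    }
}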