I am trying to implement key rollover in IdentityServer, but it seems the keys are only configured during startup.
The Crypto docs say that I should use AddValidationKeys. The startup docs show how to use it during startup, and that works perfectly fine.
Is it possible to use AddValidationKeys during runtime to handle key rollover so I do not have to restart the service to roll over the keys?
It should be possible, but not out of the box. IdentityServer4 is extremely extensible and, with regard to signing keys, it uses ISigningCredentialStore to retrieve the configured token signing keys. By default, it looks like it just injects and returns whatever you configure in Startup.
You will need to create your own implementation of ISigningCredentialStore and register it in the DI container. IdentityServer4 should then use your store to retrieve keys dynamically at runtime based on your business logic.
public class CustomSigningCredentialsStore : ISigningCredentialStore
{
    public Task<SigningCredentials> GetSigningCredentialsAsync()
    {
        // Your business logic to retrieve signing keys at runtime
        throw new NotImplementedException();
    }
}
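For example, here is a minimal sketch of a store that re-reads the signing key from a file on every call, so an external rollover process can swap the key without restarting the service. The file name, the XML key format, and the registration shown in the trailing comment are my own assumptions, not IdentityServer requirements; the companion IValidationKeysStore can be overridden in the same way so that old keys stay available for token validation during rollover.

using System.IO;
using System.Security.Cryptography;
using System.Threading.Tasks;
using IdentityServer4.Stores;
using Microsoft.IdentityModel.Tokens;

public class FileBackedSigningCredentialsStore : ISigningCredentialStore
{
    private readonly string _keyPath;

    public FileBackedSigningCredentialsStore(string keyPath = "current-signing-key.xml")
    {
        _keyPath = keyPath;
    }

    public Task<SigningCredentials> GetSigningCredentialsAsync()
    {
        // Re-read the key file each time; an external rollover job replaces the file.
        var rsa = RSA.Create();
        rsa.FromXmlString(File.ReadAllText(_keyPath));

        var credentials = new SigningCredentials(new RsaSecurityKey(rsa), SecurityAlgorithms.RsaSha256);
        return Task.FromResult(credentials);
    }
}

// In Startup.ConfigureServices, register the store after AddIdentityServer() so it
// replaces the default in-memory store:
// services.AddSingleton<ISigningCredentialStore, FileBackedSigningCredentialsStore>();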
In a nutshell, I'm in the process of upgrading a .NET Standard 2.1 app to .NET 6, and upgrading the various libraries accordingly, in particular MassTransit v5 to v8 and Autofac 4.9.4 to 6.4.0.
This is a multi-tenant application where one instance is shared by multiple tenants, and each tenant has their own database.
The upgrade has gone well apart from one snag. The application uses the no-longer-available AutofacReceivedEndpointExtensions to set up the tenant details in the consumer, and I am struggling to find a way to replicate the functionality it provides.
Below is the key bit of code.
config.ReceiveEndpoint(host, azureBusConfig.QueueName, endpoint =>
{
    ConfigureConsumer<MyConsumer>(endpoint, componentContext);
});

private static void ConfigureConsumer<TConsumer>(IServiceBusReceiveEndpointConfigurator endpoint, IComponentContext componentContext, Action<IConsumerConfigurator<TConsumer>> configure = null)
    where TConsumer : class, IConsumer
{
    endpoint.Consumer(componentContext, configure, configureScope: (container, context) =>
    {
        var tenantName = context.Headers.Get<string>("tenant");
        var userId = context.Headers.Get<int>("userId");
        container.RegisterInstance(new NamedTenantInfoProvider(tenantName, userId)).As<ITenantInfoProvider>();
    });
}
The endpoint.Consumer overload shown above is no longer provided.
The ITenantInfoProvider interface is injected into various constructors in the application, e.g. to set up the DbContext for a tenant so that it points to the correct database.
public interface ITenantInfoProvider
{
    string GetTenantName();
    int? GetUserId();
}
There are two implementations of ITenantInfoProvider: NamedTenantInfoProvider, which sets the tenant from the received message as shown above, and RequestTenantInfoProvider, which gets the tenant from the HTTP request, e.g. via an API call.
The RequestTenantInfoProvider is registered as follows:
builder.RegisterType<RequestTenantInfoProvider>()
       .As<ITenantInfoProvider>()
       .InstancePerLifetimeScope();
So, what should happen is that RequestTenantInfoProvider is injected into the constructors by default, but when a message is being consumed the NamedTenantInfoProvider instance is injected instead.
I have tried registering the NamedTenantInfoProvider in the same way as the RequestTenantInfoProvider, injecting an IEnumerable<ITenantInfoProvider> into the constructors, setting the tenant on the named instance in consumer.ConfigureConsumer, and then using whichever instance has a tenant set in the code. However, the NamedTenantInfoProvider instance is populated after it is needed in the other constructors, e.g. the DbContext.
The only way I can get the application to fully work is to hardcode a tenant name in the NamedTenantInfoProvider class.
I was hoping that someone has already refactored some similar code to replace the endpoint.ConfigureConsumer call and can advise on a solution.
It may be that I'm missing a bit of knowledge regarding how scoping works with the Microsoft Dependency Injection/MassTransit configuration. Note: I didn't write the original application, and this is my first dabble with MassTransit as well.
MassTransit v8 (and onward) only supports IServiceCollection, which is part of Microsoft.Extensions.DependencyInjection. Third-party containers are no longer directly supported.
There is a Scoped Filter sample that might help you understand how scopes work with MSDI. The token concept in that sample is similar to the "tenant info" you need to inject into consumers; a rough sketch of applying it to your case follows.
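In that sketch (type names TenantInfoHolder and ScopedTenantInfoProvider are placeholders I introduced, not MassTransit types; ITenantInfoProvider, MyConsumer and the "tenant"/"userId" headers come from your code), a scoped consume filter populates a scope-local holder from the message headers, and the ITenantInfoProvider resolved by the consumer's dependencies reads from that holder. How this coexists with your RequestTenantInfoProvider registration is something you would still have to decide.

using System.Threading.Tasks;
using MassTransit;

// Mutable, scope-local holder populated by the filter before the consumer runs.
public class TenantInfoHolder
{
    public string TenantName { get; set; }
    public int? UserId { get; set; }
}

// The rest of the application keeps depending on ITenantInfoProvider; this
// implementation simply reads whatever the holder contains for the current scope.
public class ScopedTenantInfoProvider : ITenantInfoProvider
{
    private readonly TenantInfoHolder _holder;
    public ScopedTenantInfoProvider(TenantInfoHolder holder) => _holder = holder;

    public string GetTenantName() => _holder.TenantName;
    public int? GetUserId() => _holder.UserId;
}

// Scoped consume filter: created in the same container scope as the consumer,
// so the holder it fills is the one the consumer's dependencies will see.
public class TenantConsumeFilter<T> : IFilter<ConsumeContext<T>> where T : class
{
    private readonly TenantInfoHolder _holder;
    public TenantConsumeFilter(TenantInfoHolder holder) => _holder = holder;

    public Task Send(ConsumeContext<T> context, IPipe<ConsumeContext<T>> next)
    {
        _holder.TenantName = context.Headers.Get<string>("tenant");
        _holder.UserId = context.Headers.Get<int>("userId");
        return next.Send(context);
    }

    public void Probe(ProbeContext context) => context.CreateFilterScope("tenantInfo");
}

// Registration (Program.cs / ConfigureServices):
// services.AddScoped<TenantInfoHolder>();
// services.AddScoped<ITenantInfoProvider, ScopedTenantInfoProvider>();
// services.AddMassTransit(x =>
// {
//     x.AddConsumer<MyConsumer>();
//     x.UsingAzureServiceBus((context, cfg) =>
//     {
//         cfg.UseConsumeFilter(typeof(TenantConsumeFilter<>), context);
//         cfg.ConfigureEndpoints(context);
//     });
// });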
I need to regenerate the Service Bus primary and secondary keys on a periodic basis. I am able to do it in the .NET Framework, but I need to do it in .NET Core or .NET 6, as it will be an Azure Function with a timer trigger.
I am using Azure.Messaging.ServiceBus, but I cannot find the corresponding methods that are in Microsoft.ServiceBus.Messaging.... to generate the keys or to update the rules.
Can someone please direct me to the documentation or sample code?
Thanks.
You can regenerate the keys using the RegenerateKeysAsync method, which is available in Microsoft.Azure.Management.ServiceBus.Fluent. It regenerates the keys at the namespace level.
Alternatively, you can use the code below to generate a new key:
string newPrimaryKey = SharedAccessAuthorizationRule.GenerateRandomKey();
rule.PrimaryKey = newPrimaryKey;
Here is a sample where you can find Azure Service Bus Queues with ASP.NET Core Services.
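Since the question targets .NET 6, note that Azure.Messaging.ServiceBus is a data-plane library; regenerating keys is a management-plane (ARM) operation. Below is a rough sketch using the newer Azure.ResourceManager.ServiceBus package as an alternative to the Fluent library above; the subscription, resource group, namespace and rule name are placeholders, and the exact model type names assume the current stable version of that package.

using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.ServiceBus;
using Azure.ResourceManager.ServiceBus.Models;

public static class KeyRotation
{
    public static async Task RegeneratePrimaryKeyAsync()
    {
        // Uses the Azure Function's managed identity or developer credentials.
        var client = new ArmClient(new DefaultAzureCredential());

        var ruleId = ServiceBusNamespaceAuthorizationRuleResource.CreateResourceIdentifier(
            "<subscription-id>", "<resource-group>", "<namespace>", "RootManageSharedAccessKey");

        ServiceBusNamespaceAuthorizationRuleResource rule =
            client.GetServiceBusNamespaceAuthorizationRuleResource(ruleId);

        // Ask the service to generate a new primary key; the response contains the new keys.
        ServiceBusAccessKeys keys = (await rule.RegenerateKeysAsync(
            new ServiceBusRegenerateAccessKeyContent(ServiceBusAccessKeyType.PrimaryKey))).Value;

        Console.WriteLine(keys.PrimaryConnectionString);
    }
}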
REFERENCES:
azure-sdk-for-net/ScenarioTests.TopicsTests.CRUDAuthorizationRules.cs
Update azure service bus queue shared access policy programmatically
I have created a web API that handles the creation of a JWT token based on the encrypted user details that it receives in a POST request.
In addition to this, the STS API should also handle the population of the caching layer (Redis or Hazelcast) with all the user data present in the database. Presently I have registered the caching service using dependency injection. This will happen only once, when the API is first initialized.
services.AddSingleton<ICacheService, RedisCacheService>();
And in the TokenController I added the service as a constructor parameter to initialize the CachingService class and thereby initialize the caching layer, so that when the cacheService object is first initialized it fetches all the user rows from the database and stores them as key-value pairs inside the Redis/Hazelcast database.
public TokenController(
    ICryptographyService cryptographyService,
    crudDBContext crudDBContext,
    IConfiguration configuration,
    ICacheService cacheService)
{
    _cryptographyService = cryptographyService;
    _context = crudDBContext;
    _config = configuration;
    _cacheService = cacheService;
}
But the TokenController constructor is invoked only when an endpoint is called, so I had to create a separate default [HttpGet] endpoint to ensure that the constructor is called when the STS API is first initialized, so that the cacheService object gets created and the data gets loaded into the cache.
public ActionResult<string> Get()
{
    return "STS";
}
Please let me know if there is a proper way of doing this without calling an endpoint, i.e. being able to use dependency injection but at the same time run some code without an endpoint being called. I need to use dependency injection because I should be able to switch between Redis and Hazelcast by just changing the class name in the Startup.cs file.
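For reference, one common way to run that initialization at startup without an HTTP call is a hosted service that simply depends on ICacheService, so the singleton (and its cache-loading constructor) is created when the host starts. A minimal sketch, not from the original post:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class CacheWarmupService : IHostedService
{
    private readonly ICacheService _cacheService;

    public CacheWarmupService(ICacheService cacheService)
    {
        // Resolving the dependency is enough: the cache service's constructor
        // already loads all user rows into Redis/Hazelcast.
        _cacheService = cacheService;
    }

    public Task StartAsync(CancellationToken cancellationToken) => Task.CompletedTask;

    public Task StopAsync(CancellationToken cancellationToken) => Task.CompletedTask;
}

// In Startup.ConfigureServices:
// services.AddSingleton<ICacheService, RedisCacheService>();
// services.AddHostedService<CacheWarmupService>();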
With respect to Hazelcast and dependency injection: first, you would need to use the sources and not the Hazelcast NuGet version. Next, the configuration depends on whether you are in a container environment or a hosted environment. In both cases configuration keys will be gathered from the same sources and in the same order, and the options will be registered in the service container and available via dependency injection.
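As an illustration of that last point, a rough sketch of consuming the bound Hazelcast options from the container, assuming the AddHazelcast registration extension from the Hazelcast .NET client sources is available (the bootstrap class below is my own placeholder, not a Hazelcast type):

using System.Threading.Tasks;
using Hazelcast;
using Microsoft.Extensions.Options;

public class HazelcastCacheBootstrap
{
    private readonly HazelcastOptions _options;

    public HazelcastCacheBootstrap(IOptions<HazelcastOptions> options)
    {
        // Options gathered from appsettings.json, environment variables, etc.
        _options = options.Value;
    }

    public async Task<IHazelcastClient> ConnectAsync()
    {
        // Start a client against the cluster described by the bound options.
        return await HazelcastClientFactory.StartNewClientAsync(_options);
    }
}

// In Startup.ConfigureServices:
// services.AddHazelcast(Configuration);          // registers IOptions<HazelcastOptions>
// services.AddSingleton<HazelcastCacheBootstrap>();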
We're in the process of trying to secure our application secrets in our internal ASP.NET Framework web applications. The initial plan offered to me was to use Azure Key Vault. I began development work using my Visual Studio Enterprise subscription, and that seems to work fine, locally.
We've created a second Key Vault in our company's production environment, and again, I can use it locally, because my own AAD account has access to the vault. However, in this project (a 4.7.2 Web Forms web application), I don't see any means of specifying the Access Policy principal that we've created for the application.
My google-fu is failing me: is there any documentation that explains how to do this? Is this scenario -- an on-prem, ASP.NET Framework app outside of the Azure environment, accessing Key Vault for configuration values -- even possible?
Thanks.
UPDATE: I was unable to find a solution that would allow me to use the Access Policy principal from within the "Add Connected Service" dialog. I'm somewhat surprised it's not in there, or it's hidden well enough to elude me. So I ended up writing my own Key Vault secret-reader function, similar to the marked answer. Hope this helps someone...
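For anyone in the same spot, here is a rough sketch of such a secret reader using the current Azure SDK packages (Azure.Identity and Azure.Security.KeyVault.Secrets, which also work on .NET Framework 4.7.2). The vault URL, secret name, and the way the service principal's credentials are supplied are placeholders.

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

public static class KeyVaultSecretReader
{
    public static string GetSecret(string vaultUrl, string secretName,
        string tenantId, string clientId, string clientSecret)
    {
        // Authenticate as the service principal created for the application.
        var credential = new ClientSecretCredential(tenantId, clientId, clientSecret);
        var client = new SecretClient(new Uri(vaultUrl), credential);

        // A synchronous call is fine for reading a handful of configuration values at startup.
        KeyVaultSecret secret = client.GetSecret(secretName).Value;
        return secret.Value;
    }
}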
In this scenario, your option is to use a service principal to access the Key Vault. Please follow the steps below; my sample gets a secret from the Key Vault.
1. Register an application with Azure AD and create a service principal.
2. Get the values for signing in and create a new application secret.
3. Navigate to the Key Vault in the portal -> Access policies -> add the correct secret permission for the service principal.
4. Then use the code below, replacing <client-id>, <tenant-id>, <client-secret> with the values obtained before.
using System;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

namespace test1
{
    class Program
    {
        static void Main(string[] args)
        {
            // Authenticate as the service principal with its client id, tenant id and secret.
            var azureServiceTokenProvider = new AzureServiceTokenProvider("RunAs=App;AppId=<client-id>;TenantId=<tenant-id>;AppKey=<client-secret>");
            var kv = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));

            // Fetch the secret named "mySecret123" and print its value.
            var secret = kv.GetSecretAsync("https://keyvaultname.vault.azure.net/", "mySecret123").GetAwaiter().GetResult();
            Console.WriteLine(secret.Value);
        }
    }
}
I am trying to retrieve secrets from Azure Key Vault using a service identity in an ASP.NET 4.6.2 web application. I am using the code as outlined in this article. Locally, things are working fine, though that is because it is using my identity. When I deploy the application to Azure, I get an exception when keyVaultClient.GetSecretAsync(keyUrl) is called.
As best as I can tell everything is configured correctly. I created a User assigned identity so it could be reused and made sure that identity had get access to secrets and keys in the KeyVault policy.
The exception is an AzureServiceTokenProviderException. It is verbose and outlines how it tried four methods to authenticate. The information I'm concerned about is when it tries to use Managed Service Identity:
Tried to get token using Managed Service Identity. Access token could
not be acquired. MSI ResponseCode: BadRequest, Response:
I checked Application Insights and saw that it tried to make the following connection, with a 400 result error:
http://127.0.0.1:41340/MSI/token/?resource=https://vault.azure.net&api-version=2017-09-01
There are two things interesting about this:
Why is it trying to connect to a localhost address? This seems wrong.
Could this be getting a 400 back because the resource parameter isn't escaped?
In the MsiAccessTokenProvider source, it only uses that form of an address when the environment variables MSI_ENDPOINT and MSI_SECRET are set. They are not set in application settings, but I can see them in the debug console when I output environment variables.
At this point I don't know what to do. The examples online all make it seem like magic, but if I'm right about the source of the problem then there's some obscure automated setting that needs fixing.
For completeness here is all of my relevant code:
public class ServiceIdentityKeyVaultUtil : IDisposable
{
    private readonly AzureServiceTokenProvider azureServiceTokenProvider;
    private readonly Uri baseSecretsUri;
    private readonly KeyVaultClient keyVaultClient;

    public ServiceIdentityKeyVaultUtil(string baseKeyVaultUrl)
    {
        baseSecretsUri = new Uri(new Uri(baseKeyVaultUrl, UriKind.Absolute), "secrets/");
        azureServiceTokenProvider = new AzureServiceTokenProvider();
        keyVaultClient = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
    }

    public async Task<string> GetSecretAsync(string key, CancellationToken cancellationToken = new CancellationToken())
    {
        var keyUrl = new Uri(baseSecretsUri, key).ToString();
        try
        {
            var secret = await keyVaultClient.GetSecretAsync(keyUrl, cancellationToken);
            return secret.Value;
        }
        catch (Exception ex)
        {
            /** rethrows error with extra details */
        }
    }

    /** IDisposable support */
}
UPDATE #2 (I erased update #1)
I created a completely new app or a new service instance and was able to recreate the error. However, in all instances I was using a User Assigned Identity. If I remove that and use a System Assigned Identity then it works just fine.
I don't know why these would be any different. Does anybody have an insight? I would prefer the user-assigned one.
One of the key differences of a user-assigned identity is that you can assign it to multiple services. It exists as a separate asset in Azure, whereas a system identity is bound to the lifecycle of the service to which it is paired.
From the docs:
A system-assigned managed identity is enabled directly on an Azure service instance. When the identity is enabled, Azure creates an identity for the instance in the Azure AD tenant that's trusted by the subscription of the instance. After the identity is created, the credentials are provisioned onto the instance. The lifecycle of a system-assigned identity is directly tied to the Azure service instance that it's enabled on. If the instance is deleted, Azure automatically cleans up the credentials and the identity in Azure AD.
A user-assigned managed identity is created as a standalone Azure resource. Through a create process, Azure creates an identity in the Azure AD tenant that's trusted by the subscription in use. After the identity is created, the identity can be assigned to one or more Azure service instances. The lifecycle of a user-assigned identity is managed separately from the lifecycle of the Azure service instances to which it's assigned.
User assigned identities are still in preview for App Services. See the documentation here. It may still be in private preview (i.e. Microsoft has to explicitly enable it on your subscription), it may not be available in the region you have selected, or it could be a defect.
To use a user-assigned identity, the HTTP call to get a token must include the identity's ID; otherwise it will attempt to use a system-assigned identity.
Why is it trying to connect to a localhost address? This seems wrong.
Because the MSI endpoint is local to App Service, only accessible from within the instance.
Could this be getting a 400 back because the resource parameter isn't escaped?
Yes, but I don't think that was the reason here.
In the MsiAccessTokenProvider source, it only uses that form of an address when the environment variables MSI_ENDPOINT and MSI_SECRET are set. They are not set in application settings, but I can see them in the debug console when I output environment variables.
These are added by App Service invisibly, not added to app settings.
As for how to use the user-assigned identity,
I couldn't see a way to do that with the AppAuthentication library.
You could make the HTTP call manually in Azure: https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#get-a-token-using-http.
Then you gotta take care of caching yourself though!
Managed identity endpoints can't handle a lot of queries at one time :)
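To illustrate that manual call for the App Service case in this question (the 2017-09-01 api-version and the MSI_ENDPOINT/MSI_SECRET variables mentioned above), here is a rough sketch; the user-assigned identity's client ID goes in the clientid parameter, and no caching is shown, which you would need to add yourself.

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class ManagedIdentityToken
{
    private static readonly HttpClient Http = new HttpClient();

    public static async Task<string> GetTokenResponseAsync(string resource, string userAssignedClientId)
    {
        // App Service injects these environment variables; they are not app settings.
        var endpoint = Environment.GetEnvironmentVariable("MSI_ENDPOINT");
        var secret = Environment.GetEnvironmentVariable("MSI_SECRET");

        var url = $"{endpoint}?resource={Uri.EscapeDataString(resource)}" +
                  $"&api-version=2017-09-01&clientid={userAssignedClientId}";

        using (var request = new HttpRequestMessage(HttpMethod.Get, url))
        {
            request.Headers.Add("Secret", secret);
            var response = await Http.SendAsync(request);
            response.EnsureSuccessStatusCode();

            // The JSON body contains access_token and expires_on; parse and cache it yourself.
            return await response.Content.ReadAsStringAsync();
        }
    }
}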