Can we get a ConfigurationClient and/or SecretClient from a configuration builder object in an Azure Functions .NET Core app?

I was working on a project which required me to create Key Vault references in Azure App Configuration, add/update secrets in Key Vault, and access values in App Configuration using Configuration.
Currently, I'm using:
- ConfigurationClient to create Key Vault references.
- SecretClient to add/update secrets in Key Vault.
- A Configuration built using builder.AddAzureAppConfiguration().Build() to access values in App Configuration (using builder.AddAzureAppConfiguration() is a necessity due to its features).
So, basically, three connections to Azure are made here. Is there any way to decrease the number of connections? For example, using the ConfigurationBuilder to get a ConfigurationClient and/or SecretClient.

Since your application is accessing two different resources, App Configuration and Key Vault, a minimum of two connections is needed; shared connections across different services are not supported.
Assuming your application is using ConfigureKeyVault to access Key Vault references, the call to AddAzureAppConfiguration().Build() is actually creating two connections: one to App Configuration and the other to Key Vault. In this case, there are a total of four connections. You can reduce that to three by registering the SecretClient you created to add/update secrets in Key Vault in the AddAzureAppConfiguration method:
SecretClient secretClient = new SecretClient(new Uri("https://my-keyvault-uri"), new DefaultAzureCredential());

builder.AddAzureAppConfiguration(options =>
{
    options.Connect(settings["connection_string"])
           .ConfigureKeyVault(kv => kv.Register(secretClient));
});
At this time, there isn't a supported way to provide an existing instance of ConfigurationClient while setting up the AddAzureAppConfiguration method, but this may be supported in the future.

Related

How to regenerate Service Bus queue keys

I need to regenerate the Service Bus primary and secondary keys on a periodic basis. I am able to do it in .NET Framework, but I need to do it in .NET Core or .NET 6, as it will be an Azure Function with a timer trigger.
I am using Azure.Messaging.ServiceBus, but I cannot find the methods corresponding to the ones in Microsoft.ServiceBus.Messaging for generating the keys or updating the rules.
Can someone please direct me to the documentation or sample code?
Thanks.
You can regenerate the keys using the RegenerateKeysAsync method, which is available in Microsoft.Azure.Management.ServiceBus.Fluent. It regenerates the keys at the namespace level.
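A rough sketch of that approach (a fragment; the credentials object, resource group, namespace, and rule name are placeholders, and the exact Fluent method names should be verified against your SDK version, where key regeneration is exposed on the rule object):

// Sketch: roll the primary key of a namespace-level authorization rule using
// the Fluent management SDK. All names below are placeholders.
var azure = Azure.Authenticate(credentials).WithDefaultSubscription();
var sbNamespace = await azure.ServiceBusNamespaces.GetByResourceGroupAsync("my-rg", "my-namespace");
var rule = await sbNamespace.AuthorizationRules.GetByNameAsync("RootManageSharedAccessKey");
var keys = await rule.RegenerateKeyAsync(Policykey.PrimaryKey); // or Policykey.SecondaryKey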
Alternatively, you can use the code below to generate a new key (here rule is a SharedAccessAuthorizationRule from the older Microsoft.ServiceBus.Messaging library):
string newPrimaryKey = SharedAccessAuthorizationRule.GenerateRandomKey();
rule.PrimaryKey = newPrimaryKey;
Here is a sample showing how to use Azure Service Bus queues with ASP.NET Core services.
REFERENCES:
azure-sdk-for-net/ScenarioTests.TopicsTests.CRUDAuthorizationRules.cs
Update azure service bus queue shared access policy programmatically

On-prem ASP.NET Framework web app with Azure Key Vault

We're in the process of trying to secure our application secrets in our internal ASP.NET Framework web applications. The initial plan offered to me was to use Azure Key Vault. I began development work using my Visual Studio Enterprise subscription, and that seems to work fine, locally.
We've created a second Key Vault in our company's production environment, and again, I can use it locally, because my own AAD account has access to the vault. However, in this project (4.7.2 Web Forms web application), I don't see any means of specifying the Access Policy principal that we've created for the application.
My google-fu is failing me: is there any documentation that explains how to do this? Is this scenario -- an on-prem, ASP.NET Framework app outside of the Azure environment, accessing Key Vault for configuration values -- even possible?
Thanks.
UPDATE: I was unable to find a solution that would allow me to use the Access Policy principal from within the "Add Connected Service" dialog. I'm somewhat surprised it's not in there, or is hidden enough to elude me. So I ended up writing my own Key Vault Secret-Reader function, similar to the marked answer. Hope this helps someone...
In this scenario, your option is to use a service principal to access the Key Vault. Follow the steps below; my sample gets a secret from the Key Vault.
1. Register an application with Azure AD and create a service principal.
2. Get the values for signing in and create a new application secret.
3. Navigate to the Key Vault in the portal -> Access policies -> add the correct secret permission for the service principal.
4. Use the code below, replacing <client-id>, <tenant-id>, and <client-secret> with the values obtained above.
using System;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

namespace test1
{
    class Program
    {
        static void Main(string[] args)
        {
            // Authenticate as the service principal using its client secret.
            var azureServiceTokenProvider = new AzureServiceTokenProvider("RunAs=App;AppId=<client-id>;TenantId=<tenant-id>;AppKey=<client-secret>");
            var kv = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));

            // Fetch the secret named "mySecret123" and print its value.
            var secret = kv.GetSecretAsync("https://keyvaultname.vault.azure.net/", "mySecret123").GetAwaiter().GetResult();
            Console.WriteLine(secret.Value);
        }
    }
}

Web.Config transforms for Multi-Tenant deployment of WebForms app in docker over AWS ECS

Environment
ASP.NET WebForms app over IIS
Docker container host
AWS ECS hosting platform
Each client hosting its own copy of the app with private database connection string
Background
In the non-docker environment, each copy is a virtual directory under IIS, and thus has its own individual web.config pointing to a dedicated database. The underlying codebase is the same for each client, with no client-specific customization involved.
In the docker environment (one container per client), each copy is deployed as the root application, so the route becomes /.
Challenge
Since the root image is going to be the same, how can the web.config be overridden for each client deployment?
We shouldn't create multiple images (one per client), as that would mean extra deployment jobs and losing out on centralization. The connection strings should ideally be stored in some kind of dictionary storage at the ECS level that can provide client-specific values when the corresponding containers load.
Presenting the approach we used to solve this issue. Hope it may help others stuck in similar cases.
With the problem statement tied to having a single root image and applying any customization at runtime, we knew there needed to be a transformation of the web.config at load time of each container.
The solution was a PowerShell script that reads the web.config and replaces the specific values whose keys carry a custom prefix. The values are passed in through custom environment variables at the ECS level, and the web.config keys were updated to carry the prefix.
Since a docker container can have only a single entry point, a new base image was created that instantiates an IIS server and calls a PowerShell script at startup. That script invokes the transformation script and then sets ServiceMonitor on the w3wp process.
Thanks a lot for this article https://anthonychu.ca/post/overriding-web-config-settings-environment-variables-containerized-aspnet-apps/
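As an illustration of the transformation idea (the actual script was PowerShell), a minimal C# sketch of the same substitution might look like this; the CUSTOM_ prefix and the web.config path are assumptions:

// Sketch: substitute appSettings values whose keys carry an assumed "CUSTOM_"
// prefix with matching environment variables supplied by the ECS task.
var doc = new System.Xml.XmlDocument();
doc.Load(@"C:\inetpub\wwwroot\web.config"); // assumed path inside the container

foreach (System.Xml.XmlElement setting in doc.SelectNodes("/configuration/appSettings/add"))
{
    var key = setting.GetAttribute("key");
    if (!key.StartsWith("CUSTOM_")) continue;

    var value = Environment.GetEnvironmentVariable(key); // ECS-supplied variable
    if (value != null) setting.SetAttribute("value", value);
}

doc.Save(@"C:\inetpub\wwwroot\web.config");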
I would use environment variables, as the OP suggests, with a startup transform; however, I want to make the point that you do not want sensitive information such as DB passwords in ENV variables in your ECS task definition.
For that protected information, you should use ECS secrets coupled with Parameter Store in Systems Manager. These values can be stored encrypted in the Parameter Store (using a KMS key) and the ECS Agent will 'inject' them as ENV variables on task startup.
For me, to simplify matters, I simply use secrets for everything although you can choose to only encrypt the sensitive information and leave the others clear.
I dynamically add the secrets for the given application into my task definitions at deploy time by looking up the 'secrets' for the given app by 'namespace' (something that Parameter Store supports). Then, if I need to add a new parameter, I can just add a new secret to the store in the given namespace and re-deploy the app. It will pick up and inject into the task definition any newly defined secrets automatically (or remove ones that have been retired).
Sample ruby code for creating task definition:
# Fetch all parameters under the app's namespace in Parameter Store.
params = ssm_client.get_parameters_by_path(path: '/production/my_app/').parameters
# Map each parameter to an ECS secret: the ENV name is the last path segment,
# and the value is pulled from the parameter's ARN at task startup.
secrets = params.map{ |p| { name: p.name.split("/")[-1], value_from: p.arn } }
task_def.container_definitions[0].secrets = secrets
This last transform injects the secrets such that the secret 'name' is the ENV variable name... which ends up looking like this:
"secrets": [
{
"valueFrom": "arn:aws:ssm:us-east-1:578610029524:parameter/production/my_app/DB_HOSTNAME",
"name": "DB_HOSTNAME"
},
{
"valueFrom": "arn:aws:ssm:us-east-1:578610029524:parameter/production/my_app/DB_PASSWORD",
"name": "DB_PASSWORD"
}
You can see there are no values now in the task definition. They are retrieved and injected when ECS starts up your task.
More information:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html

Not able to connect to Azure Key Vault when using Service Identity

I am trying to retrieve secrets from Azure Key Vault using a service identity in an ASP.NET 4.6.2 web application. I am using the code as outlined in this article. Locally, things are working fine, though this is because it is using my identity. When I deploy the application to Azure I get an exception when keyVaultClient.GetSecretAsync(keyUrl) is called.
As best as I can tell everything is configured correctly. I created a user-assigned identity so it could be reused, and made sure that identity had Get access to secrets and keys in the Key Vault policy.
The exception is an AzureServiceTokenProviderException. It is verbose and outlines how it tried four methods to authenticate. The information I'm concerned about is when it tries to use Managed Service Identity:
Tried to get token using Managed Service Identity. Access token could
not be acquired. MSI ResponseCode: BadRequest, Response:
I checked application insights and saw that it tried to make the following connection with a 400 result error:
http://127.0.0.1:41340/MSI/token/?resource=https://vault.azure.net&api-version=2017-09-01
There are two things interesting about this:
Why is it trying to connect to a localhost address? This seems wrong.
Could this be getting a 400 back because the resource parameter isn't escaped?
In the MsiAccessTokenProvider source, it only uses that form of an address when the environment variables MSI_ENDPOINT and MSI_SECRET are set. They are not set in application settings, but I can see them in the debug console when I output environment variables.
At this point I don't know what to do. The examples online all make it seem like magic, but if I'm right about the source of the problem then there's some obscure automated setting that needs fixing.
For completeness here is all of my relevant code:
public class ServiceIdentityKeyVaultUtil : IDisposable
{
    private readonly AzureServiceTokenProvider azureServiceTokenProvider;
    private readonly Uri baseSecretsUri;
    private readonly KeyVaultClient keyVaultClient;

    public ServiceIdentityKeyVaultUtil(string baseKeyVaultUrl)
    {
        baseSecretsUri = new Uri(new Uri(baseKeyVaultUrl, UriKind.Absolute), "secrets/");
        azureServiceTokenProvider = new AzureServiceTokenProvider();
        keyVaultClient = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
    }

    public async Task<string> GetSecretAsync(string key, CancellationToken cancellationToken = new CancellationToken())
    {
        var keyUrl = new Uri(baseSecretsUri, key).ToString();
        try
        {
            var secret = await keyVaultClient.GetSecretAsync(keyUrl, cancellationToken);
            return secret.Value;
        }
        catch (Exception ex)
        {
            /** rethrows error with extra details */
        }
    }

    /** IDisposable support */
}
UPDATE #2 (I erased update #1)
Whether I created a completely new app or a new service instance, I was able to recreate the error. However, in all instances I was using a user-assigned identity. If I remove that and use a system-assigned identity then it works just fine.
I don't know why these would behave differently. Does anybody have any insight? I would prefer the user-assigned one.
One of the key differences of a user-assigned identity is that you can assign it to multiple services. It exists as a separate asset in Azure, whereas a system identity is bound to the life cycle of the service to which it is paired.
From the docs:
A system-assigned managed identity is enabled directly on an Azure service instance. When the identity is enabled, Azure creates an identity for the instance in the Azure AD tenant that's trusted by the subscription of the instance. After the identity is created, the credentials are provisioned onto the instance. The lifecycle of a system-assigned identity is directly tied to the Azure service instance that it's enabled on. If the instance is deleted, Azure automatically cleans up the credentials and the identity in Azure AD.
A user-assigned managed identity is created as a standalone Azure resource. Through a create process, Azure creates an identity in the Azure AD tenant that's trusted by the subscription in use. After the identity is created, the identity can be assigned to one or more Azure service instances. The lifecycle of a user-assigned identity is managed separately from the lifecycle of the Azure service instances to which it's assigned.
User assigned identities are still in preview for App Services. See the documentation here. It may still be in private preview (i.e. Microsoft has to explicitly enable it on your subscription), it may not be available in the region you have selected, or it could be a defect.
To use a user-assigned identity, the HTTP call to get a token must include the identity's id.
Otherwise it will attempt to use a system-assigned identity.
Why is it trying to connect to a localhost address? This seems wrong.
Because the MSI endpoint is local to App Service, only accessible from within the instance.
Could this be getting a 400 back because the resource parameter isn't escaped?
Yes, but I don't think that was the reason here.
In the MsiAccessTokenProvider source, it only uses that form of an address when the environment variables MSI_ENDPOINT and MSI_SECRET are set. They are not set in application settings, but I can see them in the debug console when I output environment variables.
These are added by App Service invisibly, not added to app settings.
As for how to use the user-assigned identity,
I couldn't see a way to do that with the AppAuthentication library.
You could make the HTTP call manually in Azure: https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#get-a-token-using-http.
Then you gotta take care of caching yourself though!
Managed identity endpoints can't handle a lot of queries at one time :)
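For reference, a rough sketch of that manual call against the local MSI endpoint (a fragment; it assumes the App Service 2017-09-01 MSI protocol, where the user-assigned identity is selected with a clientid parameter, so verify against the current docs):

// Sketch: request a Key Vault token for a user-assigned identity from the
// local MSI endpoint. MSI_ENDPOINT / MSI_SECRET are injected by App Service.
var endpoint = Environment.GetEnvironmentVariable("MSI_ENDPOINT");
var secret = Environment.GetEnvironmentVariable("MSI_SECRET");
var url = endpoint
        + "?resource=" + Uri.EscapeDataString("https://vault.azure.net")
        + "&api-version=2017-09-01"
        + "&clientid=<user-assigned-client-id>"; // id of the user-assigned identity

var request = new HttpRequestMessage(HttpMethod.Get, url);
request.Headers.Add("Secret", secret);
var response = await new HttpClient().SendAsync(request);
// The JSON body contains access_token; cache it yourself to avoid throttling.
var json = await response.Content.ReadAsStringAsync();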

AppFabric 1.1: How Many DataCacheServerEndpoints Should a Client Connect To?

The AppFabric 1.1 client documentation discusses assigning a list of DataCacheServerEndpoints to the DataCacheFactoryConfiguration. Most of the examples show the list consisting of a single or perhaps two different cache servers. If the cluster consists of n servers, should the client register each of the servers? Does it matter what order the servers are registered in? For example, if I have 50 servers in my web tier and 5 servers in my cache tier, does each of the 50 web servers register all 5 caching servers? Here is sample code:
// Declare array for cache host(s).
DataCacheServerEndpoint[] servers = new DataCacheServerEndpoint[5];
servers[0] = new DataCacheServerEndpoint("Cache01", 22233);
servers[1] = new DataCacheServerEndpoint("Cache02", 22233);
servers[2] = new DataCacheServerEndpoint("Cache03", 22233);
servers[3] = new DataCacheServerEndpoint("Cache04", 22233);
servers[4] = new DataCacheServerEndpoint("Cache05", 22233);
// Setup the DataCacheFactory configuration.
DataCacheFactoryConfiguration factoryConfig = new DataCacheFactoryConfiguration();
factoryConfig.Servers = servers;
// Create a configured DataCacheFactory object.
DataCacheFactory mycacheFactory = new DataCacheFactory(factoryConfig);
// Get a cache client for the cache named "default".
DataCache myDefaultCache = mycacheFactory.GetCache("default");
Can each web server register identically, and will the load be balanced across the caching tier? If a registered server becomes unavailable is the next one tried in sequence, or is it randomized? Links to supporting documentation would be helpful.
Related to load balancing, Jason Roth wrote the following [is there other documentation available]?
App fabric client is smart client and it can directly contact the server which ever server has your data. The application need not worry about load balancing. This is done using the routing client.
Based on some testing, and letting Jason Roth's comment sink in, I think the DataCacheServerEndpoint list is used by the "smart client" to retrieve the list of cache cluster members when the GetCache method is called on the DataCacheFactory. The DataCache object is the thing that is smart, in the sense that if the server used in the DataCacheServerEndpoint instantiation goes offline or otherwise becomes unavailable, the smart client still has access to the other cluster members. Therefore, the purpose of a list of more than one DataCacheServerEndpoint is to provide redundancy when calling the GetCache method.
The advice is that the DataCache object should follow a singleton pattern and not be instantiated on each request for data from the cache, which is why there is no need to load-balance or provide a VIP for the individual DataCacheServerEndpoints.
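Following that advice, a minimal sketch of the singleton pattern (the endpoint names are placeholders, and the Microsoft.ApplicationServer.Caching namespace is assumed):

// Sketch: build the DataCacheFactory/DataCache once per process and reuse it,
// rather than instantiating it on every request.
public static class CacheHolder
{
    private static readonly Lazy<DataCache> cache = new Lazy<DataCache>(() =>
    {
        DataCacheServerEndpoint[] servers =
        {
            new DataCacheServerEndpoint("Cache01", 22233), // placeholder hosts
            new DataCacheServerEndpoint("Cache02", 22233)
        };
        var config = new DataCacheFactoryConfiguration { Servers = servers };
        return new DataCacheFactory(config).GetCache("default");
    });

    public static DataCache Default
    {
        get { return cache.Value; }
    }
}

Callers would then use CacheHolder.Default.Get(...) / Put(...) so every request shares the same client.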
Instantiate as many DataCacheServerEndpoints as needed to ensure at least one is up at all times; there is no need to add every member of the cache cluster unless that is the only way to ensure at least one is up.
When it comes to administering the boxes in the cache cluster (for instance, applying monthly patches), consider minimizing cache thrashing and rebalancing by administering a single box at a time, rather than attempting to administer groups of boxes in "waves".
