I'm running the codeless version of Application Insights in a Windows Server 2016 Azure VM. With the SDK I know it is possible to, for example, add custom telemetry so that I can update the cloudRoleName value that appears in my metrics.
My problem is that for the performance counters pushed by Application Insights, it only provides a value like w3wp#1 for process-related data, but I really want to be able to relate this process to an application pool (ideally to a cloudRoleName).
Can I add any configuration to the App Insights agent that will allow me to add custom telemetry, or will I have to add the SDK to each of the .NET applications running on this VM to achieve this?
If I understand you correctly, you want to provide a custom value for cloudRoleName, right?
If that's the case, the only way is to use code (there is no codeless option; see this issue) by implementing an ITelemetryInitializer. Here is an example:
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public class CloudRoleNameTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // Set the custom role name on every telemetry item.
        telemetry.Context.Cloud.RoleName = "Custom RoleName";
    }
}
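For the initializer to take effect with the .NET Framework SDK, it also has to be registered. A minimal sketch, assuming an ASP.NET application that registers it in Global.asax (it can alternatively be added in ApplicationInsights.config):

using Microsoft.ApplicationInsights.Extensibility;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Register the initializer once at startup so every telemetry item gets the custom role name.
        TelemetryConfiguration.Active.TelemetryInitializers.Add(new CloudRoleNameTelemetryInitializer());
    }
}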
For more details, you can refer to this article.
Related
We're in the process of trying to secure our application secrets in our internal ASP.NET Framework web applications. The initial plan offered to me was to use Azure Key Vault. I began development work using my Visual Studio Enterprise subscription, and that seems to work fine, locally.
We've created a second Key Vault in our company's production environment, and again, I can use it locally, because my own AAD account has access to the vault. However, in this project (4.7.2 Web Forms web application), I don't see any means of specifying the Access Policy principal that we've created for the application.
My google-fu is failing me: is there any documentation that explains how to do this? Is this scenario -- an on-prem, ASP.NET Framework app outside of the Azure environment, accessing Key Vault for configuration values -- even possible?
Thanks.
UPDATE: I was unable to find a solution that would allow me to use the Access Policy principal from within the "Add Connected Service" dialog. I'm somewhat surprised it's not in there, or is hidden enough to elude me. So I ended up writing my own Key Vault Secret-Reader function, similar to the marked answer. Hope this helps someone...
In this scenario, your option is to use a service principal to access the Key Vault. Please follow the steps below; my sample gets a secret from the Key Vault.
1. Register an application with Azure AD and create a service principal.
2. Get the values for signing in and create a new application secret.
3. Navigate to the Key Vault in the portal -> Access policies -> add the appropriate secret permissions for the service principal.
4. Then use the code below, replacing <client-id>, <tenant-id>, and <client-secret> with the values obtained above.
using System;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

namespace test1
{
    class Program
    {
        static void Main(string[] args)
        {
            // Authenticate as the service principal with its client id, tenant id and secret.
            var azureServiceTokenProvider = new AzureServiceTokenProvider("RunAs=App;AppId=<client-id>;TenantId=<tenant-id>;AppKey=<client-secret>");
            var kv = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));

            // Read the secret named "mySecret123" from the vault and print its value.
            var secret = kv.GetSecretAsync("https://keyvaultname.vault.azure.net/", "mySecret123").GetAwaiter().GetResult();
            Console.WriteLine(secret.Value);
        }
    }
}
I am trying to retrieve secrets from Azure Key Vault using a managed service identity in an ASP.NET 4.6.2 web application. I am using the code as outlined in this article. Locally, things are working fine, though that is because it is using my identity. When I deploy the application to Azure, I get an exception when keyVaultClient.GetSecretAsync(keyUrl) is called.
As best as I can tell everything is configured correctly. I created a User assigned identity so it could be reused and made sure that identity had get access to secrets and keys in the KeyVault policy.
The exception is an AzureServiceTokenProviderException. It is verbose and outlines how it tried four methods to authenticate. The information I'm concerned about is when it tries to use Managed Service Identity:
Tried to get token using Managed Service Identity. Access token could not be acquired. MSI ResponseCode: BadRequest, Response:
I checked application insights and saw that it tried to make the following connection with a 400 result error:
http://127.0.0.1:41340/MSI/token/?resource=https://vault.azure.net&api-version=2017-09-01
There are two things interesting about this:
Why is it trying to connect to a localhost address? This seems wrong.
Could this be getting a 400 back because the resource parameter isn't escaped?
In the MsiAccessTokenProvider source, it only uses that form of an address when the environment variables MSI_ENDPOINT and MSI_SECRET are set. They are not set in application settings, but I can see them in the debug console when I output environment variables.
At this point I don't know what to do. The examples online all make it seem like magic, but if I'm right about the source of the problem then there's some obscure automated setting that needs fixing.
For completeness here is all of my relevant code:
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

public class ServiceIdentityKeyVaultUtil : IDisposable
{
    private readonly AzureServiceTokenProvider azureServiceTokenProvider;
    private readonly Uri baseSecretsUri;
    private readonly KeyVaultClient keyVaultClient;

    public ServiceIdentityKeyVaultUtil(string baseKeyVaultUrl)
    {
        baseSecretsUri = new Uri(new Uri(baseKeyVaultUrl, UriKind.Absolute), "secrets/");
        azureServiceTokenProvider = new AzureServiceTokenProvider();
        keyVaultClient = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
    }

    public async Task<string> GetSecretAsync(string key, CancellationToken cancellationToken = new CancellationToken())
    {
        var keyUrl = new Uri(baseSecretsUri, key).ToString();
        try
        {
            var secret = await keyVaultClient.GetSecretAsync(keyUrl, cancellationToken);
            return secret.Value;
        }
        catch (Exception ex)
        {
            /** rethrows error with extra details */
            throw;
        }
    }

    /** IDisposable support */
}
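For context, the class above is called roughly like this from an async method (the vault URL and secret name here are placeholders):

// Hypothetical vault URL and secret name, for illustration only.
var keyVaultUtil = new ServiceIdentityKeyVaultUtil("https://myvault.vault.azure.net/");
string secretValue = await keyVaultUtil.GetSecretAsync("MyConnectionString");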
UPDATE #2 (I erased update #1)
I created a completely new app or a new service instance and was able to recreate the error. However, in all instances I was using a User Assigned Identity. If I remove that and use a System Assigned Identity then it works just fine.
I don't know why these would behave any differently. Does anybody have any insight? I would prefer to use the user-assigned identity.
One of the key differences of a user-assigned identity is that you can assign it to multiple services. It exists as a separate asset in Azure, whereas a system identity is bound to the lifecycle of the service to which it is paired.
From the docs:
A system-assigned managed identity is enabled directly on an Azure service instance. When the identity is enabled, Azure creates an identity for the instance in the Azure AD tenant that's trusted by the subscription of the instance. After the identity is created, the credentials are provisioned onto the instance. The lifecycle of a system-assigned identity is directly tied to the Azure service instance that it's enabled on. If the instance is deleted, Azure automatically cleans up the credentials and the identity in Azure AD.
A user-assigned managed identity is created as a standalone Azure resource. Through a create process, Azure creates an identity in the Azure AD tenant that's trusted by the subscription in use. After the identity is created, the identity can be assigned to one or more Azure service instances. The lifecycle of a user-assigned identity is managed separately from the lifecycle of the Azure service instances to which it's assigned.
User assigned identities are still in preview for App Services. See the documentation here. It may still be in private preview (i.e. Microsoft has to explicitly enable it on your subscription), it may not be available in the region you have selected, or it could be a defect.
To use a user-assigned identity, the HTTP call to get a token must include the identity's id.
Otherwise it will attempt to use a system-assigned identity.
Why is it trying to connect to a localhost address? This seems wrong.
Because the MSI endpoint is local to App Service, only accessible from within the instance.
Could this be getting a 400 back because the resource parameter isn't escaped?
Yes, but I don't think that was the reason here.
In the MsiAccessTokenProvider source, it only uses that form of an address when the environment variables MSI_ENDPOINT and MSI_SECRET are set. They are not set in application settings, but I can see them in the debug console when I output environment variables.
These are added by App Service invisibly, not added to app settings.
As for how to use the user-assigned identity, I couldn't see a way to do that with the AppAuthentication library.
You could make the HTTP call manually in Azure: https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#get-a-token-using-http.
Then you gotta take care of caching yourself though!
Managed identity endpoints can't handle a lot of queries at one time :)
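For reference, a minimal sketch of that manual call against the App Service MSI endpoint (api-version 2017-09-01); the clientid parameter for user-assigned identities and the <client-id> placeholder are assumptions to check against the linked docs, and no token caching is done here:

using System;
using System.Net.Http;

class MsiTokenSample
{
    static void Main()
    {
        // MSI_ENDPOINT and MSI_SECRET are injected by App Service when a managed identity is enabled.
        var endpoint = Environment.GetEnvironmentVariable("MSI_ENDPOINT");
        var msiSecret = Environment.GetEnvironmentVariable("MSI_SECRET");
        var resource = Uri.EscapeDataString("https://vault.azure.net");

        using (var client = new HttpClient())
        {
            var request = new HttpRequestMessage(
                HttpMethod.Get,
                endpoint + "?resource=" + resource + "&api-version=2017-09-01&clientid=<client-id>");
            request.Headers.Add("Secret", msiSecret);

            var response = client.SendAsync(request).GetAwaiter().GetResult();
            // The response body is JSON containing the access_token to present to Key Vault.
            Console.WriteLine(response.Content.ReadAsStringAsync().GetAwaiter().GetResult());
        }
    }
}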
I have used App Insights directly for application logging before, and I have seen that the .NET Core platform also creates trace events that go to App Insights.
In a new .NET Core API application, I'd like to use Serilog for application logging and App Insights for storing and visualizing the log events. I'd like to know:
How do I continue to get the .NET Core-created trace events into App Insights?
How can I pass a correlation ID from my application to the .NET Core-created trace events?
Will the end-to-end transaction feature in the App Insights portal show all the events together? It is important for me to keep an eye on the latency of SQL calls.
Simply using Serilog.Sinks.ApplicationInsights is not enough, as it will not correlate Serilog events with the rest of your telemetry on Application Insights.
To correlate the events, so they are shown as one "End-to-End transaction" - you have to do the following things:
Create a Serilog enricher that records the current Activity id as a ScalarValue in a LogEventProperty - see OperationIdEnricher (a minimal sketch follows this list)
[Optional] Create an extension for this enricher - see LoggingExtensions
Register the enricher / add it to the pipeline via code or config - see logging.json
Create a custom TelemetryConverter (a subclass of TraceTelemetryConverter or EventTelemetryConverter) for Application Insights that sets telemetry.Context.Operation.Id from the value recorded in step 1 - see OperationTelemetryConverter
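A minimal sketch of such an enricher, assuming the property name "Operation Id" and Activity.RootId as the operation id source (adjust both to whatever your telemetry converter expects):

using System.Diagnostics;
using Serilog.Core;
using Serilog.Events;

public class OperationIdEnricher : ILogEventEnricher
{
    public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory)
    {
        var activity = Activity.Current;
        if (activity == null) return;

        // Record the current operation id as a ScalarValue so a custom telemetry
        // converter can later copy it into telemetry.Context.Operation.Id.
        logEvent.AddPropertyIfAbsent(
            new LogEventProperty("Operation Id", new ScalarValue(activity.RootId)));
    }
}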
Check out my blog post "Serilog with ApplicationInsights", which explains the points above in more detail and includes links.
Also, be sure to take a look at Telemetry correlation in Application Insights on MSDN
If you are using ILogger in .NET Core for logging purposes, those messages can be directed to Application Insights with the following modification to Startup.cs:
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    /* ...existing code... */
    loggerFactory.AddApplicationInsights(app.ApplicationServices, LogLevel.Warning);
}
If you employ your own correlation ID, you can set the Application Insights correlation IDs accordingly in the Context.Operation field of each telemetry item with your own telemetry initializer, or pass those values in the respective headers (Request-Id for the global ID and Correlation-Context for name-value pairs) in requests to this app - AI will pick up the correlation IDs from those.
The end-to-end transaction (requests, dependencies and exceptions) is supposed to be displayed together on a timeline in the details view of Application Insights telemetry. With your own correlation IDs it should work as well, as long as they are present from the very beginning of the transaction (e.g. in the first component) - injecting them in the middle will break the chain.
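As an illustration of the initializer approach, here is a minimal sketch; MyCorrelationContext is a hypothetical stand-in for however your application tracks its own correlation ID:

using System.Threading;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Hypothetical holder for the application's own correlation ID.
public static class MyCorrelationContext
{
    private static readonly AsyncLocal<string> Current = new AsyncLocal<string>();
    public static string CurrentId
    {
        get { return Current.Value; }
        set { Current.Value = value; }
    }
}

public class CorrelationIdTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        var correlationId = MyCorrelationContext.CurrentId;
        if (!string.IsNullOrEmpty(correlationId))
        {
            // Overwrite the operation id so this item joins your own correlation chain.
            telemetry.Context.Operation.Id = correlationId;
        }
    }
}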
I'm running into an issue integrating Spring Security with my Elastic Beanstalk app backed by a MySQL database. If I deploy my app I'm able to log in correctly for some time, but eventually I'll start to receive login errors without an exception being thrown, so I'm unable to get any useful information about the issue. I've downloaded the logs as well and can't see anything of value. I can see where the logs show accessing the public page, attempting to access the private section, returning the login page, and then the loginError page; however, there is nothing about any issue.
Even though I'm unable to log in through a browser, I am able to log in if I run the app from an IDE, and I can view the db in MySQL Workbench. This suggests to me the problem is due to some persistent state on the server.
I've had a similar problem before with another Beanstalk app using Spring Security and was able to resolve it by setting application properties as follows:
spring.datasource.test-on-borrow=true
spring.datasource.validation-query=SELECT 1
I'm using a more recent version of Spring than that app and the properties have been changed to specific datasources so I tried adding the following properties:
spring.datasource.tomcat.test-on-borrow=true
spring.datasource.tomcat.validation-query=SELECT 1
When that didn't work I added another based on an answer to a similar question here; now the properties are:
spring.datasource.tomcat.test-on-borrow=true
spring.datasource.tomcat.test-while-idle=true
spring.datasource.tomcat.validation-query=SELECT 1
That seemed to work (possibly due to less login activity) but eventually resulted in the same behavior.
I've looked into the various properties available but before I spend a lot of time randomly setting and/or overriding default settings I wanted to see if there's a reliable way to deal with this.
How can I configure my datasource to avoid login errors after long periods of time?
This isn't a problem of specific configuration values but with where those configurations reside. The default location for the application.properties (/resources; Intellij) is fine for deploying as a jar with an embedded Tomcat server but not as a war with a provided server. The file isn't found/used so no changes to the file affect the one given by AWS.
There are a number of ways to handle this; I chose to add an RDS configuration bean in my SpringBootServletInitializer:
@Bean
public RdsInstanceConfigurer instanceConfigurer() {
    return () -> {
        TomcatJdbcDataSourceFactory dataSourceFactory =
                new TomcatJdbcDataSourceFactory();
        // Abandoned connections
        dataSourceFactory.setRemoveAbandonedTimeout(60);
        dataSourceFactory.setRemoveAbandoned(true);
        dataSourceFactory.setLogAbandoned(true);
        // Tests
        dataSourceFactory.setTestOnBorrow(true);
        dataSourceFactory.setTestOnReturn(false);
        dataSourceFactory.setTestWhileIdle(false);
        // Validation
        dataSourceFactory.setValidationInterval(30000);
        dataSourceFactory.setTimeBetweenEvictionRunsMillis(30000);
        dataSourceFactory.setValidationQuery("SELECT 1");
        return dataSourceFactory;
    };
}
Below are the settings that worked for me.
From Connection to Db dies after >4<24 in spring-boot jpa hibernate
dataSourceFactory.setMaxActive(10);
dataSourceFactory.setInitialSize(10);
dataSourceFactory.setMaxIdle(10);
dataSourceFactory.setMinIdle(1);
dataSourceFactory.setTestWhileIdle(true);
dataSourceFactory.setTestOnBorrow(true);
dataSourceFactory.setValidationQuery("SELECT 1 FROM DUAL");
dataSourceFactory.setValidationInterval(10000);
dataSourceFactory.setTimeBetweenEvictionRunsMillis(20000);
dataSourceFactory.setMinEvictableIdleTimeMillis(60000);
Does anyone know how I can start developing a multi-tenant site in MVC2, in a way that it runs on Windows Azure?
I have searched a lot about this question, and I always find theoretical explanations; everybody says it can be done easily, but I can't find any samples...
Can someone explain to me where to start?
Thanks,
João
It depends on how you plan on implementing multitenancy (eg. using authorization with common urls, subdomains, custom domains, or any combination). But you should be able to do just about any approach with Azure and MVC2. If you plan on using a custom domain for each tenant, versus a subdomain, you will need to be happy with using CNAME entries (not A records) to point each custom domain to Azure but that usually is not a problem.
MVC offers many extension points where you can implement multitenancy in its various flavors. The main goal is to uniquely identify the user by either a login or the url.
We have an MVC2 application running in Azure that parses the request url to differentiate the tenant. There are many ways to do this. We took the approach of extending the Controller class to provide our app with the unique tenant information so we could use it as needed to make appropriate repository calls to display the proper views etc.
Here is a sample of what a MultiTenant Controller might look like:
public class MultiTenantController : Controller {
    // _tenantService is assumed to be initialized elsewhere; it resolves tenants from
    // persistent storage (Azure Tables in our case), as described in the notes below.
    private readonly ITenantService _tenantService;

    public string TenantCode { get; set; }

    protected override void OnActionExecuting(ActionExecutingContext filterContext) {
        TenantCode = GetTenantCode(filterContext.HttpContext.Request);
        base.OnActionExecuting(filterContext);
    }

    private string GetTenantCode(System.Web.HttpRequestBase request) {
        string host = new RequestParser(request.Url.AbsoluteUri).Host;
        return _tenantService.GetTenantCodeByHostAddress(host);
    }
}
NOTES:
The RequestParser function above is just any implementation that knows how to parse URLs in a safe manner (a minimal sketch is included at the end of this answer).
_tenantService can access some kind of persistent store (Azure Tables in our case) to get the TenantCode from the host address in the URL.
All of your controllers would inherit from the above class. Then, to differentiate between tenants you just refer to the TenantCode within your controller like so:
public class HomeController : MultiTenantController {
    ...
    public ViewResult Index() {
        var vm = _homeService.GetHomePageViewModelForTenant(TenantCode);
        return View(vm);
    }
}
Using the above implementation you could serve different sites or data to urls like the following:
http://subtenant1.yourdomain.com
http://subtenant2.yourdomain.com
http://www.customtenantdomain.com
Your backend store (e.g. Table Storage) just needs to cross-reference host names with tenants, as in the table below. In the code above, GetTenantCode would access this data.
HostName                  TenantCode
------------------------  ----------
subtenant1                Tenant1ID
subtenant2                Tenant2ID
www.customtenantdomain    Tenant3ID
For www.customtenantdomain.com to work, the tenant needs a CNAME entry for www in their DNS records for customtenantdomain.com that points to your Azure Web Role's address.
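For reference, a minimal sketch of a RequestParser along the lines described in the notes above; the exact name and behaviour are assumptions, and any URL parser that extracts the host safely will do:

using System;

public class RequestParser {
    public string Host { get; private set; }

    public RequestParser(string absoluteUri) {
        // Parse defensively: never throw on a malformed URL, just return an empty host.
        Uri uri;
        Host = Uri.TryCreate(absoluteUri, UriKind.Absolute, out uri)
            ? uri.Host.ToLowerInvariant()
            : string.Empty;
    }
}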
It's hugely complex and not something to be taken on lightly. However, take a look at the source code for Microsoft's Orchard project. This has full multi-tenancy capabilities, if that's what you need: http://orchard.codeplex.com/
And they have a build that works in Azure too.
In this guide we cover aspects of this, and it includes a full sample using MVC 2.
First of all, the existing answers are very helpful. How you set up your multitenancy depends on your decisions; the most important thing is identifying the tenant in your app, and there are many ways to solve that. For example, you can identify the tenant via subdomains or by parsing the URL, and you might also store your data in a multi-tenant database.
There are very helpful posts written by Steve Morgan.
These will only help you get started with multi-tenancy. Here are the blog posts:
Identifying the Tenant in Multi-Tenant Azure Applications - Part 1
Identifying the Tenant in Multi-Tenant Azure Applications - Part 2
Identifying the Tenant in Multi-Tenant Azure Applications - Part 3
And here are the Multi-Tenant Data Strategies for Windows Azure :
Multi-Tenant Data Strategies for Windows Azure – Part 1
Multi-Tenant Data Strategies for Windows Azure – Part 2