Custom metrics confusion - stackdriver

I added Micrometer (https://micrometer.io) to our staging server in Google Cloud. The metric does not show up under the "Cloud Run Revision" resource type. It is only visible if I select "Global", as seen here...
The instructions were simple and clear (much unlike OpenCensus, which has a badly overdesigned API). In fact, unlike OpenCensus, it worked out of the box, except that it is not recording into "Cloud Run Revision".
I can't even choose service_name in the filter, so once I deploy to production the metric will record both prod and staging, which is not what we want.
How do I debug Micrometer further?
If anyone knows offhand what the issue might be, that would be great as well (though I don't mind learning Micrometer and debugging it a bit more).

For now, the only monitored-resource types available for custom metrics are:
aws_ec2_instance: Amazon EC2 instance.
dataflow_job: Dataflow job.
gce_instance: Compute Engine instance.
gke_container: GKE container instance.
generic_node: User-specified computing node.
generic_task: User-defined task.
global: Use this resource when no other resource type is suitable. For most use cases, generic_node or generic_task are better choices than global.
k8s_cluster: Kubernetes cluster.
k8s_container: Kubernetes container.
k8s_node: Kubernetes node.
k8s_pod: Kubernetes pod.
So, global is the correct monitored-resource type in this case, since there is not a Cloud Run monitored-resource type yet.
To identify the metrics better, you can create metric descriptors, either through auto-creation or manually.
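For the manual route, a descriptor can be created with the Cloud Monitoring Java client library. This is only a minimal sketch; the metric type, the "environment" label, the project id and the method name are made-up placeholders, not something from the question:
import java.io.IOException;
import com.google.api.LabelDescriptor;
import com.google.api.MetricDescriptor;
import com.google.cloud.monitoring.v3.MetricServiceClient;
import com.google.monitoring.v3.CreateMetricDescriptorRequest;
import com.google.monitoring.v3.ProjectName;

static void createDescriptor(String projectId) throws IOException {
    try (MetricServiceClient client = MetricServiceClient.create()) {
        MetricDescriptor descriptor = MetricDescriptor.newBuilder()
                .setType("custom.googleapis.com/myapp/orders_processed") // hypothetical metric type
                .setMetricKind(MetricDescriptor.MetricKind.GAUGE)
                .setValueType(MetricDescriptor.ValueType.DOUBLE)
                .setDescription("Example custom metric")
                .addLabels(LabelDescriptor.newBuilder()
                        .setKey("environment") // hypothetical label, e.g. prod vs staging
                        .setValueType(LabelDescriptor.ValueType.STRING))
                .build();
        client.createMetricDescriptor(CreateMetricDescriptorRequest.newBuilder()
                .setName(ProjectName.of(projectId).toString())
                .setMetricDescriptor(descriptor)
                .build());
    }
}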

For completeness, I have it recording all the JVM stats now, but I have a new post about aggregation in Google's console here that seems to be a new issue...
Google Cloud Metrics and MicroMeter JVM reporting (is this a Micrometer bug or?)
My code that did the trick is below (and using revisionName is CRITICAL for not getting errors!):
String projectId = MetadataConfig.getProjectId();
String service = System.getenv("K_SERVICE");
String revisionName = System.getenv("K_REVISION");
String config = System.getenv("K_CONFIGURATION");
String zone = MetadataConfig.getZone();
Map<String, String> map = new HashMap<>();
map.put("namespace", service);
map.put("job", "nothing");
map.put("task_id", revisionName);
map.put("location", zone);
log.info("project="+projectId+" svc="+service+" r="+revisionName+" config="+config+" zone="+zone);
StackdriverConfig stackdriverConfig = new OurGoogleConfig(projectId, map);
//figure out how to put in template better
MeterRegistry googleRegistry = StackdriverMeterRegistry.builder(stackdriverConfig).build();
Metrics.addRegistry(googleRegistry);
//This is what would be used in Development Server
//Metrics.addRegistry(new SimpleMeterRegistry());
//How to expose on #backend perhaps at /#metrics
CompositeMeterRegistry registry = Metrics.globalRegistry;
new ClassLoaderMetrics().bindTo(registry);
new JvmMemoryMetrics().bindTo(registry);
new JvmGcMetrics().bindTo(registry);
new ProcessorMetrics().bindTo(registry);
new JvmThreadMetrics().bindTo(registry);
and then the config is simple...
private static class OurGoogleConfig implements StackdriverConfig {
private String projectId;
private Map<String, String> resourceLabels;
public OurGoogleConfig(String projectId, Map<String, String> resourceLabels) {
this.projectId = projectId;
this.resourceLabels = resourceLabels;
}
@Override
public String projectId() {
return projectId;
}
@Override
public String get(String key) {
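// returning null makes every other config property fall back to its Micrometer default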
return null;
}
@Override
public String resourceType() {
return "generic_task";
}
@Override
public Map<String, String> resourceLabels() {
//they call this EVERY time, so save on memory by only passing the same
//map every time instead of re-creating it...
return resourceLabels;
}
};

Related

Unity to DryIoC conversion ParameterOverride

We are transitioning from Xamarin.Forms to .NET MAUI, but our project uses Prism.Unity.Forms. We have a lot of code that basically uses IContainer.Resolve<T>(), passing in a collection of ParameterOverrides; some are primitives, but some are interfaces/objects. The T we are resolving is usually a registered View, which may or may not be the correct way of doing this, but it's what I'm working with, and we are doing it in backend code (sometimes a service). What is the correct way of doing this Unity thing in DryIoc? Note that these parameters are being set at runtime and may be only part of the parameters a constructor takes in (some may come from already registered dependencies).
Example of the scenario:
//Called from service into custom resolver method
var parameterOverrides = new[]
{
new ParameterOverride("productID", 8675309),
new ParameterOverride("objectWithData", IObjectWithData)
};
//Custom resolver method example
var resolverOverrides = new List<ResolverOverride>();
foreach(var parameterOverride in parameterOverrides)
{
resolverOverrides.Add(parameterOverride);
}
return _container.Resolve<T>(resolverOverrides.ToArray());
You've found out why you don't use the container outside of the resolution root. I recommend not trying to replicate this error with another container, but rather fixing it: use hand-coded factories:
internal class SomeFactory : IProductViewFactory
{
public SomeFactory( IService dependency )
{
_dependency = dependency ?? throw new ArgumentNullException( nameof(dependency) );
}
#region IProductViewFactory
public IProductView Create( int productID, IObjectWithData objectWithData ) => new SomeProduct( productID, objectWithData, _dependency );
#endregion
#region private
private readonly IService _dependency;
#endregion
}
See this, too:
For dependencies that are independent of the instance you're creating, inject them into the factory and store them until needed.
For dependencies that are independent of the context of creation but need to be recreated for each created instance, inject factories into the factory and store them.
For dependencies that are dependent on the context of creation, pass them into the Create method of the factory.
Also, be aware of potential subtle differences in container behaviours: Unity's ResolverOverride applies to the whole resolve call, i.e. it overrides parameters of dependencies too, whatever happens to match by name. This could very well be handled differently by DryIoc.
First, I would agree with the @haukinger answer to rethink how you pass the runtime information into the services. The most transparent and simple way, in my opinion, is to pass it via parameters into the consuming methods.
Second, here is a complete example in DryIoc to solve it head-on + the live code to play with.
using System;
using DryIoc;
public class Program
{
record ParameterOverride(string Name, object Value);
record Product(int productID);
public static void Main()
{
// get container somehow,
// if you don't have an access to it directly then you may resolve it from your service provider
IContainer c = new Container();
c.Register<Product>();
var parameterOverrides = new[]
{
new ParameterOverride("productID", 8675309),
new ParameterOverride("objectWithData", "blah"),
};
var parameterRules = Parameters.Of;
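// chain one Details rule per override: match constructor parameters by name and supply the given value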
foreach (var po in parameterOverrides)
{
parameterRules = parameterRules.Details((_, x) => x.Name.Equals(po.Name) ? ServiceDetails.Of(po.Value) : null);
}
c = c.With(rules => rules.With(parameters: parameterRules));
var s = c.Resolve<Product>();
Console.WriteLine(s.productID);
}
}

From the consumer end, is there an option to create a topic with custom configurations?

I'm writing a Kafka consumer using the 'org.springframework.kafka.annotation.KafkaListener' (@KafkaListener) annotation. This annotation expects the topic to already exist at the time of subscribing, and it tries to create the topic if it is not present.
In my case, I don't want the consumer to create a topic with the default configuration; it should create a topic with custom configurations (like the number of partitions, cleanup policy, etc.). Is there any option for this in spring-kafka?
See the documentation on configuring topics.
If you define a KafkaAdmin bean in your application context, it can automatically add topics to the broker. To do so, you can add a NewTopic @Bean for each topic to the application context. The following example shows how to do so:
@Bean
public KafkaAdmin admin() {
Map<String, Object> configs = new HashMap<>();
configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
StringUtils.arrayToCommaDelimitedString(embeddedKafka().getBrokerAddresses()));
return new KafkaAdmin(configs);
}
@Bean
public NewTopic topic1() {
return new NewTopic("thing1", 10, (short) 2);
}
@Bean
public NewTopic topic2() {
return new NewTopic("thing2", 10, (short) 2);
}
By default, if the broker is not available, a message is logged, but the context continues to load. You can programmatically invoke the admin’s initialize() method to try again later. If you wish this condition to be considered fatal, set the admin’s fatalIfBrokerNotAvailable property to true. The context then fails to initialize.
If the broker supports it (1.0.0 or higher), the admin increases the number of partitions if it is found that an existing topic has fewer partitions than the NewTopic.numPartitions.
If you are using Spring Boot, you don't need an admin bean because Boot will automatically configure one for you.
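If you also need topic-level settings such as the cleanup policy, NewTopic accepts a configs map. A small sketch; the topic name and the values are just examples, not something prescribed by spring-kafka:
@Bean
public NewTopic compactedTopic() {
    Map<String, String> topicConfigs = new HashMap<>();
    topicConfigs.put("cleanup.policy", "compact");  // example topic-level setting
    topicConfigs.put("retention.ms", "604800000");  // example: 7 days
    return new NewTopic("thing3", 10, (short) 2).configs(topicConfigs);
}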

Building pact-jvm for Scala 2.11

This is a question about the project https://github.com/DiUS/pact-jvm.
Problem
When I am validating pacts I need to be able to use client-side authentication, as the providers actually require it. I'll prefix what I am saying with a declaration that I am not very familiar with Groovy: I mostly program in Scala, Java or JavaScript. Having looked at the code, I think that client-side authentication is not currently supported, so I'd like to make a pull request with that support in it.
What I've done so far
I have managed to get HTTPS working with a truststore: I copied the HttpTarget and created an HttpsTarget, and in the HttpsTarget specified the truststore in the ProviderInfo. Unfortunately, looking at the code there doesn't seem to be a way of specifying the client certificate, so I need to change the ProviderInfo class to be able to specify where it is (in the same way that the truststore is provided).
My problem is that I've got the code compiling using the advice in the 'for contributors' guide, but when I publish locally, I am only publishing for Scala 2.12. Because of version issues and binary incompatibilities between Scala versions, I need to publish for Scala 2.11. My skills with Gradle are even less than my skills with Groovy. I've done a search for all the references to scalaVersion and found that there is quite a lot of logic around it, but I've not managed to track down where it is specified.
Question
If I can use client-side authentication with the current pact validator, could you let me know? If not, could you tell me how to publish the project with support for Scala 2.11?
Thanks
In the end I made my own HttpTarget. My need is to run from JUnit, not the general case, and this is good enough:
public class HttpsTarget extends HttpTarget {
public HttpsTarget(final int port) {
super("https", "localhost", port, "/", false);
}
static class HttpsClientFactory implements IHttpClientFactory {
@NotNull
@Override
public CloseableHttpClient newClient(Object o) {
SSLContext sslContext = // put here code to make ssl context
CloseableHttpClient httpClient = HttpClients
.custom()
.setSSLContext(sslContext)
.build();
return httpClient;
}
}
@Override
public void testInteraction(final String consumerName, final Interaction interaction, PactSource source) {
ProviderInfo provider = getProviderInfo(source);
ConsumerInfo consumer = new ConsumerInfo(consumerName);
ProviderVerifier verifier = setupVerifier(interaction, provider, consumer);
Map<String, Object> failures = new HashMap<>();
ProviderClient client = new ProviderClient(provider, new HttpsClientFactory());
verifier.verifyResponseFromProvider(provider, interaction, interaction.getDescription(), failures, client);
reportTestResult(failures.isEmpty(), verifier);
try {
if (!failures.isEmpty()) {
verifier.displayFailures(failures);
throw getAssertionError(failures);
}
} finally {
verifier.finialiseReports();
}
}
}
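To fill in the "put here code to make ssl context" placeholder, one option is Apache HttpClient's SSLContexts builder, loading the client certificate from a keystore. This is only a rough sketch; the method name, keystore path, keystore type and password are placeholders:
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import org.apache.http.ssl.SSLContexts;

static SSLContext buildClientAuthContext() throws Exception {
    // placeholder keystore holding the client certificate and private key
    KeyStore clientStore = KeyStore.getInstance("PKCS12");
    try (FileInputStream in = new FileInputStream("/path/to/client-cert.p12")) {
        clientStore.load(in, "changeit".toCharArray());
    }
    return SSLContexts.custom()
            .loadKeyMaterial(clientStore, "changeit".toCharArray()) // client certificate + private key
            .build();
}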

Can asp.net core policies and claims handle resource/activity based authorization?

I'm looking into ASP.NET Core and the new security policies and claims functionality. Having just looked at it, I don't see how it is much better than the existing authorize attribute logic from the past, where hard-coded roles or users are decorated on controllers, methods, etc. To me the issue has just been moved from hard-coding in attributes to hard-coding policies.
Ideally I would like to perform activity/resource based authorization where everything would be database driven. Each activity or resource would be stored in the database and a permission/role would be assigned to the resource.
While researching the topic I found this fantastic article by Stefan Wloch that pretty much covers exactly what I'm looking to do.
http://www.codeproject.com/Articles/1079552/Custom-Roles-Based-Access-Control-RBAC-in-ASP-NE
So my question is: with the new Core features, how do they prevent us from having to hard-code and recompile when the time comes to change which roles/permissions are allowed to access a controller or a method in a controller? I understand how claims can be used to store anything, but the policy portion seems susceptible to change, which gets us back to square one. Don't get me wrong, I'm loving ASP.NET Core and all the great changes, just looking for more information on how to handle authorization.
There are at least two things that need to be considered in implementing what you want. The first one is how to model the controller-action access in the database; the second one is how to apply that setting in ASP.NET Core Identity.
For the first one, there are too many possibilities depending on the application itself, so let's create a service interface named IActivityAccessService that encapsulates it. We use that service via dependency injection so that anything we need can be injected into it.
As for the second one, it can be achieved by customizing an AuthorizationHandler in policy-based authorization. The first step is to set things up in Startup.ConfigureServices:
services.AddAuthorization(options =>
{
options.AddPolicy("ActivityAccess", policy => policy.Requirements.Add( new ActivityAccessRequirement() ));
});
services.AddScoped<IAuthorizationHandler, ActivityAccessHandler>();
//inject the service also
services.AddScoped<IActivityAccessService, ActivityAccessService>();
//code below will be explained later
services.AddHttpContextAccessor();
next we create the ActivityAccessHandler:
public class ActivityAccessHandler : AuthorizationHandler<ActivityAccessRequirement>
{
readonly IActivityAccessService _ActivityAccessService;
public ActivityAccessHandler (IActivityAccessService r)
{
_ActivityAccessService = r;
}
protected override async Task HandleRequirementAsync(AuthorizationHandlerContext context, ActivityAccessRequirement requirement)
{
if (context.Resource is AuthorizationFilterContext filterContext)
{
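// for MVC requests the resource is the filter context, which exposes the requested route values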
var area = (filterContext.RouteData.Values["area"] as string)?.ToLower();
var controller = (filterContext.RouteData.Values["controller"] as string)?.ToLower();
var action = (filterContext.RouteData.Values["action"] as string)?.ToLower();
var id = (filterContext.RouteData.Values["id"] as string)?.ToLower();
if (await _ActivityAccessService.IsAuthorize(area, controller, action, id))
{
context.Succeed(requirement);
}
}
}
}
public class ActivityAccessRequirement : IAuthorizationRequirement
{
//since we handle the authorization in our service, we can leave this empty
}
Since we can use dependency injection in AuthorizationHandler, it is here that we inject the IActivityAccessService.
Now that we have access to the resource being requested, we need to know who is requesting it. This can be done by injecting IHttpContextAccessor; that is why services.AddHttpContextAccessor() is added in the code above.
And for the IActivityAccessService, you could do something like:
public class ActivityAccessService : IActivityAccessService
{
readonly AppDbContext _context;
readonly IConfiguration _config;
readonly IHttpContextAccessor _accessor;
readonly UserManager<AppUser> _userManager;
public ActivityAccessService(AppDbContext d, IConfiguration c, IHttpContextAccessor a, UserManager<AppUser> u)
{
_context = d;
_config = c;
_accessor = a;
_userManager = u;
}
public async Task<bool> IsAuthorize(string area, string controller, string action, string id)
{
//get the user object from the ClaimPrincipals
var appUser = await _userManager.GetUserAsync(_accessor.HttpContext.User);
//get user roles if necessary
var userRoles = await _userManager.GetRolesAsync(appUser);
// all of needed data are available now, do the logic of authorization
return result;
}
}
Please note that the code in the IsAuthorize body above is an example. While it will work, people might say it's not good practice. But since IActivityAccessService is just a common, simple service class, we can inject anything we need into it and modify the IsAuthorize method signature in any way that we want. For example, we could just pass the filterContext.RouteData instead.
As for how to apply this to a controller or action:
[Authorize(Policy = "ActivityAccess")]
public IActionResult GetResource(int resourceId)
{
return Ok(); // load and return the requested resource here
}
hope this helps

Azure Table Storage best practice for ASP.NET MVC/WebApi

What are the best practices for connecting to Azure Table Storage from an ASP.NET MVC or Web API app?
Right now I've made a StorageContext class which holds a reference to the CloudStorageAccount and CloudTableClient, like this:
public class StorageContext
{
private static CloudStorageAccount _storageAccount;
private static CloudTableClient _tableClient;
public StorageContext() : this("StorageConnectionString") { }
public StorageContext(string connectionString)
{
if (_storageAccount == null)
_storageAccount = CloudStorageAccount.Parse(ConfigurationManager.ConnectionStrings[connectionString].ConnectionString);
if (_tableClient == null)
_tableClient = _storageAccount.CreateCloudTableClient();
}
public CloudTable Table(string tableName)
{
var table = _tableClient.GetTableReference(tableName);
table.CreateIfNotExists();
return table;
}
}
And my controller I'm using it like this:
public class HomeController : ApiController
{
private StorageContext db;
public HomeController() : this(new StorageContext()) { }
public HomeController(StorageContext context)
{
this.db = context;
}
public IHttpActionResult Get()
{
var table = db.Table("users");
var results = (from user in table.CreateQuery<User>()
select user).Take(10).ToList();
return Ok<List<User>>(results);
}
}
Is this the preferred way of doing it?
The API is going to be used on a high traffic site with > 1000 req/sec.
I also need unit tests. Using it like above, I can pass in another connection string name and instead connect to the Azure Storage Emulator in my unit tests.
Am I on the right track or are there better ways to connect?
Actually your question
What are the best practices for connecting to Azure Table Storage
from an ASP.NET MVC or Web API app?
could be restated as "What are the best practices for using a data access layer in a web application?" It is the same thing.
You can find a lot of answers about data access layer best practices, but the iron rule here is: keep your data access layer separated from your controllers and presentation. The best way is to use it through the Model in the scope of the MVC pattern, or you can think about the Repository and/or Unit of Work patterns if you like them.
In your example, your data access logic is already wrapped in StorageContext, which is fine; I would additionally extract an interface and use DI/IoC and a dependency resolver for it. That's all when speaking about your code snippet. You are on the right track.
