Disable Only HealthCheck Related Logging - .net-core

I'm trying to figure out a way to disable any health-check related ILogger logging. I am aware of LogLevel filtering, but that will not work here.
As an example, I have a health check that makes an outbound call to a RabbitMQ metrics API. This results in an outbound HTTP call for every inbound call to /health. In general, I want to log all outbound calls made using the HttpClient, but that log is now full of this particular log entry:
[2021.06.15 13:57:04] info: System.Net.Http.HttpClient.Default.LogicalHandler[101] => ConnectionId:0HM9FV5PFFL5K => RequestPath:/health RequestId:0HM9FV5PFFL5K:00000001, SpanId:|6726c52-4217ec92de4df5fb., TraceId:6726c52-4217ec92de4df5fb, ParentId: => Microsoft.Extensions.Diagnostics.HealthChecks.HealthCheckLogScope => HTTP GET http://rabbitmq/api/queues/MyVHost/MyQueue?msg_rates_age=60&msg_rates_incr=60
: End processing HTTP request after 4.6355ms - OK
So, I could apply a warning filter to the HttpClient/LogicalHandler to remove those entries, but then I'd be removing all the info logs of other outbound HTTP requests, which I don't want.
So, basically, I need a smarter filter that can look at the scopes (or even the message text in certain cases) and filter out entries based on "Microsoft.Extensions.Diagnostics.HealthChecks.HealthCheckLogScope". That doesn't seem to be possible though, as the filter callback doesn't provide those details.
Does anyone have any idea how to do more specific log filtering for cases like this?
I have looked at ".net core log filtering on certain requests", but extending every ILoggerProvider I use isn't possible, since some are not public classes.

You could use Serilog for logging, as it also provides great filtering, enriching, and formatting capabilities via the Serilog.Expressions NuGet package.
There is even a simple example provided in the link above for filtering health checks, which fulfilled my need to filter out health-check logging entirely based on the request path '/health'. Following the guide, once Serilog is configured as the application logger, it only requires adding the following appsettings.json configuration to make it work:
{
  "Serilog": {
    "Using": ["Serilog.Expressions"],
    "Filter": [
      {
        "Name": "ByExcluding",
        "Args": {
          "expression": "RequestPath like '/health%'"
        }
      }
    ]
  }
}
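For completeness, here is a minimal sketch (not from the original answer) of wiring Serilog up as the application logger so that it reads this configuration; it assumes the Serilog.AspNetCore and Serilog.Expressions packages and a typical Startup class:

using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Serilog;

public class Program
{
    public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            // Reads the "Serilog" section shown above, including the ByExcluding filter.
            .UseSerilog((context, loggerConfiguration) =>
                loggerConfiguration.ReadFrom.Configuration(context.Configuration))
            .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
}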

To filter out those requests you can try using the log4net StringMatchFilter with the health URL as its value.
Here is the configuration if the health URL is just localhost/health:
<filter type="log4net.Filter.StringMatchFilter">
  <stringToMatch value="/health" />
  <acceptOnMatch value="false" />
</filter>
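For context, the filter element belongs inside the appender it should apply to; here is a minimal sketch assuming a plain ConsoleAppender:

<appender name="Console" type="log4net.Appender.ConsoleAppender">
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date %-5level %logger - %message%newline" />
  </layout>
  <!-- Deny any event whose rendered message contains "/health"; everything else passes through. -->
  <filter type="log4net.Filter.StringMatchFilter">
    <stringToMatch value="/health" />
    <acceptOnMatch value="false" />
  </filter>
</appender>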

Related

CORS issue when calling API via Office Scripts Fetch

I am trying to make an API call via Office Scripts (fetch) to a publicly available Azure Function-based API I created. By policy we need to have CORS on for our Azure Functions. I've tried every domain I could think of, but I can't get the call to work unless I allow all origins. I've tried:
https://ourcompanydoamin.sharepoint.com
https://usc-excel.officeapps.live.com
https://browser.pipe.aria.microsoft.com
https://browser.events.data.microsoft.com
The first is the Excel Online domain I'm trying to execute from, and the rest came up during the script run in Chrome's Network tab. The error message in Office Scripts doesn't tell me the domain the request is coming from like Chrome's console does. What host do I need to allow for Office Scripts to be able to make calls to my API?
The expected CORS setting for this is: https://*.officescripts.microsoftusercontent.com.
However, Azure Functions CORS doesn't support wildcard subdomains at the moment. If you try to set an origin with wildcard subdomains, you will get an error.
One possible workaround is to explicitly maintain an "allow-list" in your Azure Functions code. Here is a proof-of-concept implementation (assuming you use Node.js for your Azure Functions):
module.exports = async function (context, req) {
    // List your allowed hosts here. Escape special characters for the regular expressions.
    const allowedHosts = [
        /https\:\/\/www\.myserver\.com/,
        /https\:\/\/[^\.]+\.officescripts\.microsoftusercontent\.com/
    ];
    if (!allowedHosts.some(host => host.test(req.headers.origin))) {
        context.res = {
            status: 403, /* Forbidden */
            body: "Not allowed!"
        };
        return;
    }
    // Handle the normal request and generate the expected response.
    context.res = {
        status: 200,
        body: "Allowed!"
    };
}
Please note:
Regular expressions are needed to match the dynamic subdomains.
In order to do the origin check within the code, you'll need to set * as the Allowed Origins on your Functions CORS settings page.
Or if you want to build your service with ASP.NET Core, you can do something like this: https://stackoverflow.com/a/49943569/6656547.

Kafka HTTP Topic Producer configuration acks=all

How do we set acks=all for a Kafka HTTP topic? I tried sending "acks":"all" in the JSON below, but it throws an unrecognized property error. I tried setting it in the header as well, but the header accepts any value I give it, like 0, 1, all, or abcd, so I can't rely on it. From Java code, however, we can set this property as a key-value pair with ProducerConfig.ACKS_CONFIG.
{
  "records": [
    {
      "value": { "name": "Firstname, lastname" },
      "key": "123e4567-e89b-42d3-a456-5566424415123591"
    }
  ]
}
Any suggestions are helpful.
In the case of the Kafka REST Proxy, producer instances are shared between clients, i.e. the client(s) connect to the REST proxy instance(s), and those REST proxy instance(s) in turn connect to the Kafka cluster (brokers).
The JSON that you have supplied is only going to contain data records.
The REST proxy layer holds the global producer settings you're looking for, and clients end up sharing these. So, if you have access to the REST proxy instances, you can modify these parameters there directly.
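For example, if you are running Confluent's Kafka REST Proxy, producer-level overrides can be supplied in kafka-rest.properties by prefixing them with producer. (a sketch; the broker address is illustrative and worth checking against the documentation for your proxy version):

# kafka-rest.properties
bootstrap.servers=PLAINTEXT://broker-1:9092
# Shared by every producer the proxy creates on behalf of its clients
producer.acks=all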

Enable CORs for Swashbuckle swagger.json in .NET Lambda API

I have a .NET Lambda API for which I was previously using Swashbuckle to generate a swagger.json file that was given to an external site to use. I am now trying to set things up so the swagger.json file is generated by the API and available through a URL for the external site to use, i.e.: mylambdaapi.com/swagger/v2/swagger.json. I was able to get this working by adding a dummy event to my template when pushing to AWS, as follows:
"SwaggerJson": {
"Type": "Api",
"Properties": {
"Path": "/swagger/v2/swagger.json",
"Method": "GET"
}
}
This works for just accessing the file normally; however, the external site runs into CORS "No 'Access-Control-Allow-Origin' header" issues when trying to load the JSON. Is there any way to force the generation to use "Access-Control-Allow-Origin" in this case? Or is this not feasible in this way? I'm working off what another developer built previously, so I'm trying not to rewrite everything, but I'm open to another method as long as it produces some Swagger JSON that the external site can consume.
EDIT: I should note that I am using API Gateway, however the swagger.json is only used for documentation purposes for the external site.
I attempted to use the UseCors() functionality, however that did not work. I was able to fix the issue by adding an anonymous function to handle the response before UseSwagger.
The following snippet is from the Configure method in my Startup class:
app.Use((context, next) =>
{
    context.Response.Headers["Access-Control-Allow-Origin"] = "*";
    return next.Invoke();
});
app.UseSwagger();
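If you would rather not emit the header on every response, the same middleware could be scoped to the Swagger path; this is a variation I'm sketching (still registered before app.UseSwagger()), not part of the original fix:

app.Use((context, next) =>
{
    // Only add the CORS header for responses under the Swagger document path.
    if (context.Request.Path.StartsWithSegments("/swagger"))
    {
        context.Response.Headers["Access-Control-Allow-Origin"] = "*";
    }
    return next.Invoke();
});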

Apigee fault handling for CLASSIFICATION_FAILURE

In Apigee, can fault handling (specifying a FaultRule and a RaiseFault policy) be used to handle and provide a custom message for:
{
  "fault": {
    "faultstring": "Not Found",
    "detail": {
      "errorcode": "CLASSIFICATION_FAILURE"
    }
  }
}
If this can be done, should the 'Condition' for the fault rule be 'fault.name = "CLASSIFICATION_FAILURE"'? I tried this and it is not working.
CLASSIFICATION_FAILURE is a system-level failure to find an API proxy for the given URL/URI. The request will not even reach the API proxy (and hence its policies), which is precisely what the system is complaining about.
So you cannot handle an error like that inside the proxy itself.
Another way to approach this case is to have a catch-all API proxy with basepath /**, which will be invoked when there is no specific URL match. You can generate a custom message in this proxy; this can be the message you wanted to send across in case of a classification failure.
Srikanth's answer on 30/05/2014 is only partially correct. Using a basepath of /** did not work for us. Instead, we had to create an API proxy with basepath = /.
Inside that proxy, we defined a RaiseFault policy in the PreFlow, and that was it.
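For reference, a RaiseFault policy for such a catch-all proxy might look like the sketch below; the policy name and payload are illustrative, not from the original answer:

<RaiseFault name="RF-CustomNotFound">
  <FaultResponse>
    <Set>
      <StatusCode>404</StatusCode>
      <ReasonPhrase>Not Found</ReasonPhrase>
      <!-- Custom message returned instead of the default CLASSIFICATION_FAILURE fault -->
      <Payload contentType="application/json">{"error": "No API proxy matches the requested URL"}</Payload>
    </Set>
  </FaultResponse>
  <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
</RaiseFault>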

How to stop consumers from hitting invalid resources in APIGee API

I have an Apigee proxy that has two resources (/resource1 and /resource2). If a consumer tries to access /resource3, how do I return a 404 error instead of the Apigee default fault?
Apigee displays the below fault string:
{
  "fault": {
    "faultstring": "The Service is temporarily unavailable",
    "detail": {
      "errorcode": "messaging.adaptors.http.flow.ServiceUnavailable"
    }
  }
}
Thanks
Currently, flows in Apigee work this way: Apigee parses through your default.xml (in the proxy) and tries to match your request to one of the flows, either through a path-suffix like "/resource1" or "/resource2", the VERB, or any other condition you might have. If it does not find any matching condition, it throws the error like above.
You can add a special flow that kicks in when the request matches none of your valid flows. You can add a RaiseFault policy in that flow and return a custom error response through it.
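To illustrate that idea, a catch-all flow placed last in the ProxyEndpoint might look like the sketch below; the flow names and conditions are illustrative, and RF-NotFound is assumed to be a RaiseFault policy that returns a 404:

<Flows>
  <Flow name="resource1">
    <Condition>(proxy.pathsuffix MatchesPath "/resource1") and (request.verb = "GET")</Condition>
  </Flow>
  <Flow name="resource2">
    <Condition>(proxy.pathsuffix MatchesPath "/resource2") and (request.verb = "GET")</Condition>
  </Flow>
  <Flow name="unknown-resource">
    <!-- No Condition: evaluated last, so it only runs when nothing above matched -->
    <Request>
      <Step>
        <Name>RF-NotFound</Name>
      </Step>
    </Request>
  </Flow>
</Flows>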
A better solution is to:
be sure to define something in the base path of all Proxy APIs
create an additional Proxy API called "catchall" with a base path of "/" and with just a RaiseFault policy throwing a 404
Apigee executes Proxy APIs from longest base path to shortest; the catchall will run last and always throw back a 404
I just want to clarify Vinit's answer. Vinit said:
If it does not find any matching condition, it throws the error like above.
Actually, if no matching flow condition is found, the request will still be sent through to the backend. The error you mentioned:
{
  "fault": {
    "faultstring": "The Service is temporarily unavailable",
    "detail": {
      "errorcode": "messaging.adaptors.http.flow.ServiceUnavailable"
    }
  }
}
was returned after attempting to connect to the backend without matching a flow.
Vinit's solution to raise a fault to create the 404 is the best solution for your requirements.
In some cases, however, it is appropriate to pass all traffic through to the backend (for example, if you don't need to modify each resource at the Apigee layer, and you don't want to have to update your Apigee proxy every time you add a new API resource). Not matching any flow condition would work fine for that use case.
