Mask sensitive data in Mule4 before logging - mule-esb

I have already gone through the link below to mask/hide/encrypt sensitive data before logging, but I am unable to do so in Mule 4. Can someone please share an implementation or suggestions?
http://bushorn.com/encrypting-a-json-element/

The link is no longer available. I'll assume it described a method based on some Log4j2 interception that will not work in Mule 4. What you can do is create a custom logging component with the Mule SDK and apply the same masking logic there, avoiding any attempt to intercept Mule 4's logging.

Related

Dynamic Flex Workflow in Studio

As part of my Twilio Studio flow I am using the HTTP request step to get data from an external database that responds back with the agent skill to be used when routing the call; however, I am unable to figure out how to use this variable in the Send to Flex step. Is it possible to dynamically set the workflow in the Send to Flex step from a variable?
I found the solution: passing JSON in the Send to Flex step and using filters at the workflow level.

Application Insights end-to-end multi component logging in Azure Functions

End-to-end logging between multiple components using Application Insights (AI) is based on a hierarchical Request-Id header, so each component is responsible for honoring a possible incoming Request-Id. To get the full end-to-end hierarchical flow correct in Application Insights, the Request-Id header needs to be used as the AI Operation.Id and Operation.ParentId (as described here).
But when making a request with a Request-Id header to an Azure Function using, for example, an HttpTrigger binding (Microsoft.NET.Sdk.Functions 1.0.24) with integrated Application Insights configured (as described here), a completely new Operation.Id is created and used - causing the whole flow in AI to be lost. Any ideas on how to get around this?
Setting up a separate custom TelemetryClient might be an option, but that seems to require a lot of configuration to get the full ExceptionTrackingTelemetryModule and DependencyTrackingTelemetryModule right - especially when using Functions v2 and .NET Core (ref to AI config). Has anyone got that working successfully?
This is not yet supported by Functions but should start working sometime early next year.
If you want to hack around it, you can add a reference to the Application Insights SDK for AspNetCore (v2.4.1) and configure a RequestTrackingTelemetryModule:
// The module that tracks incoming requests and respects the incoming Request-Id header.
private static RequestTrackingTelemetryModule requestModule;

static Function1()
{
    // Initialize once per process against the active telemetry configuration.
    requestModule = new RequestTrackingTelemetryModule();
    requestModule.Initialize(TelemetryConfiguration.Active);
}
This is pretty sketchy, not fully tested, and it has drawbacks. For example, the collected request is no longer augmented with Functions details (invocation id, etc.). To overcome that, you need to get the real TelemetryConfiguration from the Functions dependency injection container and use it to initialize the module, as sketched below.
That should be possible, but it is currently blocked by an issue.
Even with the code above, though, you should get requests that respect incoming headers, and other telemetry correlated to the request.
Also, when out-of-the-box support for HTTP request correlation is rolled out, this may break. So this is a hacky, temporary solution; use it only if you absolutely have to.
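For illustration, here is a minimal, hypothetical sketch of that DI-based initialization, assuming the Functions host can hand you its TelemetryConfiguration through constructor injection (which, per the above, is not unblocked yet); the class name is a placeholder:

using Microsoft.ApplicationInsights.AspNetCore;     // RequestTrackingTelemetryModule (from the SDK referenced above)
using Microsoft.ApplicationInsights.Extensibility;  // TelemetryConfiguration

public class Function1
{
    private static RequestTrackingTelemetryModule requestModule;

    // Assumption: the Functions runtime injects the TelemetryConfiguration it actually uses.
    public Function1(TelemetryConfiguration configuration)
    {
        if (requestModule == null)
        {
            requestModule = new RequestTrackingTelemetryModule();
            // Initializing against the real configuration (instead of TelemetryConfiguration.Active)
            // keeps the collected requests consistent with the rest of the Functions telemetry.
            requestModule.Initialize(configuration);
        }
    }
}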

Governance Registry REST API Adding states

How can I get the governance registry lifecycles in the JSON payload of this API: https://localhost:9443//governance/restservices? I am getting all the fields, but I am not getting the lifecycle value from that API. Can anyone suggest how to get it? I also see an API to get the lifecycle state, GET https://localhost:9443//governance/restservices/44dadw4/states, but I don't want to use it because I would have to call it every time for each service to get its lifecycle state. I want to use https://localhost:9443//governance/restservices and get the REST service lifecycle value in the JSON payload.
Please help; I have been searching the web for this approach but have not found enough results.
There is no way of getting lifecycle (LC) details from https://localhost:9443//governance/restservices using vanilla WSO2 G-Reg. That endpoint is used to retrieve REST service artifact metadata.
I don't understand why you can't use the https://localhost:9443//governance/restservices/44dadw4/states endpoint for this. See the documentation.
However, you can achieve this with a custom deployment. If you want to get LC details from this endpoint, you have to update the existing API code: basically, you need to update Asset.java and deploy a new governance.war file.

Adding correlation id to automatically generated telemetry with App Insights

I'm very new to Application Insights, and I'm thinking of using it for a set of services I plan on implementing with ASP.NET Web API. I was able to get the basic telemetry up and running very easily (right-clicking on a project in VS, Add Application Insights), but then I hit a block. I plan to have a correlation id set in the request headers for calls to downstream services, and I would like to tag all the telemetry related to one outside call with the same correlation id.
So far I've found that there is a way to configure a TelemetryInitializer, but if I understood correctly, this is run before I get to access the request, meaning I can't check if there is a correlation id that I should attach.
So I guess there might be two ways to solve this: 1) if I could somehow get access to the request headers before the initializer runs, that would obviously solve the problem, or 2) somehow get hold of the TelemetryClient instance that is used to report the automatically generated telemetry.
Perhaps the last resort would be to turn off all of the automatic telemetry and do all of it manually, where I could of course control what properties are set on the TelemetryClient. But this would be quite a lot more work, so I'd prefer to find some other solution.
You were right in saying that you should use a TelemetryInitializer. All TelemetryInitializers are called when the Track method is called on any telemetry item. Auto-generated request telemetry is "tracked" on request OnEnd, so all your custom headers should be available to you at that time.
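As a rough sketch of that initializer approach for classic ASP.NET Web API - the X-Correlation-Id header name and the registration spot are just assumptions to adapt to your own setup:

using System.Web;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public class CorrelationIdTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // Called for every tracked item, including the auto-generated request telemetry.
        var httpContext = HttpContext.Current;
        if (httpContext == null) return;

        // "X-Correlation-Id" is only an example header name.
        var correlationId = httpContext.Request.Headers["X-Correlation-Id"];
        if (!string.IsNullOrEmpty(correlationId))
        {
            telemetry.Context.Properties["CorrelationId"] = correlationId;
        }
    }
}

// Register it once at startup, e.g. in Global.asax Application_Start:
// TelemetryConfiguration.Active.TelemetryInitializers.Add(new CorrelationIdTelemetryInitializer());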
Please also have a look at OperationId - it is part of the standard context managed by Application Insights and is used exactly for the purpose of correlating requests with downstream execution. It is created and passed along automatically, including for traces (if you use trackTrace).
Moreover, we have built-in support in our UX for easily seeing all telemetry for a particular operation - it can be found under "Search -> Details -> Related Items -> All telemetry for this operation".

What does the IMetadataExchange endpoint actually do?

I'm working on a web service that uses the ASP.NET security model (i.e. with AspNetCompatibilityRequirements set to Allowed). Like many others, I got an error saying that Anonymous access is required because the mexHttpBinding requires it, and the only way to get around it is to remove the mex endpoint from each service, as described here:
WCF - Windows authentication - Security settings require Anonymous
I thought that by removing the mex endpoint I would no longer be able to generate WSDL or add a reference to the service from Visual Studio, but to my surprise everything still works. I quickly googled "mex binding", but most websites just say it's for "Metadata Exchange" without going into much detail on what it actually does.
Can anyone tell me what the side effect of removing the mex binding is?
If your WCF service does not expose service metadata, you cannot add a service reference to it from within Visual Studio (Add Service Reference), nor will another client be able to interrogate your service for its methods and the data it needs.
Removing Metadata Exchange (mex) basically renders the service almost "invisible" - a potential caller must find out some other way (e.g. by being supplied with a WSDL file, or by getting a class library assembly with a client they can use) what the service can do, and how.
This might be okay for a high-risk environment, but most of the time, being able to interrogate the service and have it describe itself via metadata is something you want to have enabled. That's really one of the main advantages of a SOAP-based service: by virtue of its metadata, it can describe itself, its operations, and all the data structures needed. That feature is what makes it really easy to call the service - you just point to the mex endpoint, and you can find out all you need to know about it.
Without the metadata exchange, you won't be able to use svcutil.exe to automatically generate the proxy classes.
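To make the mex endpoint concrete, here is a small self-hosted sketch showing how metadata publishing and a mex endpoint are wired up in code; MyService, IMyService and the addresses are placeholder names, and the same can be done in config via the serviceMetadata behavior and a mexHttpBinding endpoint:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string Echo(string text);
}

public class MyService : IMyService
{
    public string Echo(string text) => text;
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(MyService), new Uri("http://localhost:8080/MyService"));

        // Metadata must be enabled via ServiceMetadataBehavior for a mex endpoint to work;
        // HttpGetEnabled additionally exposes the WSDL over HTTP GET (?wsdl).
        host.Description.Behaviors.Add(new ServiceMetadataBehavior { HttpGetEnabled = true });

        // The mex endpoint: the standard IMetadataExchange contract over a metadata exchange binding.
        host.AddServiceEndpoint(typeof(IMetadataExchange),
            MetadataExchangeBindings.CreateMexHttpBinding(), "mex");

        // The actual service endpoint clients call.
        host.AddServiceEndpoint(typeof(IMyService), new BasicHttpBinding(), "");

        host.Open();
        Console.WriteLine("Service running. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}

A client can then point svcutil.exe or Visual Studio's Add Service Reference at http://localhost:8080/MyService/mex (or at the ?wsdl address enabled by HttpGetEnabled) to generate the proxy classes.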
