I have a legacy .NET 4.6.2 application that had to be monitored with OpenTelemetry. After going back and forth with the implementation, I managed to send traces to Kibana; however, most of them contain route templates like api/{controller}/{action} instead of the actual endpoint.
I am investigating this, and only in a couple of places is there any mention that OpenTelemetry does not properly handle endpoints whose route is set in an attribute, like:
[Route("api/products/{productCode}")]
public HttpResponseMessage GetProductData(string code)
{
\\some code
}
There are no details on how this can be fixed, or I am not able to find them.
Is there anything that can be done in this case? The legacy system I am working on contains around 20 controllers and more than 80 endpoints, so managing routes manually would be very challenging.
Edit: I forgot to mention that this behavior doesn't happen with the console exporter.
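Edit 2: For reference, this is the direction I have been experimenting with. It is a minimal sketch resting on two assumptions I cannot fully verify: that the OpenTelemetry.Instrumentation.AspNet version in use exposes an Enrich callback (newer versions may shape this hook differently), and that Web API keeps matched attribute routes in route data under the "MS_SubRoutes" key (an implementation detail). The idea is to copy the matched route template onto the activity:

using System.Web;
using System.Web.Http.Routing;
using OpenTelemetry;
using OpenTelemetry.Trace;

public static class TracingBootstrap
{
    public static TracerProvider Create()
    {
        return Sdk.CreateTracerProviderBuilder()
            .AddAspNetInstrumentation(options =>
            {
                options.Enrich = (activity, eventName, rawObject) =>
                {
                    if (eventName != "OnStopActivity") return;

                    var routeData = HttpContext.Current?.Request.RequestContext.RouteData;
                    // Attribute-routed requests carry their matched routes under this key.
                    var subRoutes = routeData?.Values["MS_SubRoutes"] as IHttpRouteData[];
                    if (subRoutes != null && subRoutes.Length > 0)
                    {
                        // e.g. "api/products/{productCode}" instead of "api/{controller}/{action}"
                        var template = subRoutes[0].Route.RouteTemplate;
                        activity.DisplayName = template;
                        activity.SetTag("http.route", template);
                    }
                };
            })
            .Build(); // exporter configuration omitted
    }
}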
End-to-end logging between multiple components using Application Insights (AI) is based on a hierarchical Request-Id header, so each component is responsible for honoring a possible incoming Request-Id. To get the full end-to-end hierarchical flow correct in Application Insights, the Request-Id header needs to be used as the AI Operation.Id and Operation.ParentId (as described here).
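For concreteness, honoring an incoming hierarchical Request-Id ("|<root>.<suffix>.") manually looks roughly like this; the GetOperationRoot helper is my own illustration, not an SDK API:

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public static class CorrelationExample
{
    public static void TrackIncomingRequest(TelemetryClient client, string incomingRequestId)
    {
        var request = new RequestTelemetry { Name = "MyOperation" }; // placeholder name
        request.Context.Operation.ParentId = incomingRequestId;      // the caller's Request-Id
        request.Context.Operation.Id = GetOperationRoot(incomingRequestId);
        client.TrackRequest(request);
    }

    // The root is the segment between the leading '|' and the first '.'.
    private static string GetOperationRoot(string requestId)
    {
        var start = requestId.StartsWith("|") ? 1 : 0;
        var end = requestId.IndexOf('.');
        return end > start
            ? requestId.Substring(start, end - start)
            : requestId.Substring(start);
    }
}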
But when making a request with a Request-Id header to an Azure Function using an HttpTrigger binding (for example, Microsoft.NET.Sdk.Functions 1.0.24) with integrated Application Insights configured (as described here), a completely new Operation.Id is created and used, causing the whole flow in AI to be lost. Any ideas on how to get around this?
Setting up a separate custom TelemetryClient might be an option, but it seems to require a lot of configuration to get the full ExceptionTrackingTelemetryModule and DependencyTrackingTelemetryModule right, especially when using Functions v2 and Core (ref to AI config). Has anyone got that working successfully?
This is not yet supported by Functions but should start working sometime early next year.
If you want to hack it through, you can add a reference to ApplicationInsights SDK for AspNetCore (v 2.4.1) and configure RequestTrackingTelemetryModule.
private static RequestTrackingTelemetryModule requestModule;

static Function1()
{
    // Initialize request tracking once per host, against the active configuration.
    requestModule = new RequestTrackingTelemetryModule();
    requestModule.Initialize(TelemetryConfiguration.Active);
}
This is pretty sketchy, not fully tested, and has drawbacks. E.g. requests collected are no longer augmented with Functions details (invocation ID, etc.). To overcome this you need to get the real TelemetryConfiguration from the Function dependency injection container and use it to initialize the module.
It should be possible, but is blocked by some issue.
But even with the code above, you should get requests that respect incoming headers and other telemetry correlated to the request.
Also, when out-of-the-box support for correlation for http request is rolled out, this may break. So this is a hacky temporary solution, use it only if you absolutely have to.
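If you do go down the DI route mentioned above, here is a hedged sketch, assuming the Functions v2 host registers TelemetryConfiguration in its container and that constructor injection on a non-static function class works in your setup:

using Microsoft.ApplicationInsights.AspNetCore; // adjust if the module's namespace differs in your SDK version
using Microsoft.ApplicationInsights.Extensibility;

public class Function1
{
    private static RequestTrackingTelemetryModule requestModule;

    // Assumption: the Functions v2 host resolves TelemetryConfiguration from DI.
    public Function1(TelemetryConfiguration telemetryConfiguration)
    {
        if (requestModule == null)
        {
            requestModule = new RequestTrackingTelemetryModule();
            requestModule.Initialize(telemetryConfiguration);
        }
    }
}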
We've been experiencing some deadlocks when working with interconnected ASP.NET WebApis on the same IIS server. We'd like to know if this is somehow an expected behavior, due to hosting all APIs on the same server and same Application Pool, since we have managed to avoid the issue by moving either WebApi to a different pool; or if there's something wrong with our code.
For production, we will probably host the APIs on different servers or pools, but we'd still like to understand why this is happening. Our main concern is that, if it's our faulty code, the issue may be reproduced on a larger scale, even if the hosting setup is correct.
We have created a little solution to reproduce the deadlock, hosted on GitHub.
The reproduction steps are as follows:
WebClient executes multiple HTTP requests in parallel to WebApi1.
WebApi1 executes an HTTP request to WebApi2.
WebApi2 executes an HTTP request to WebApi3.
WebApi3 simply returns a string.
The expected behavior would be that all requests are eventually resolved.
The actual behavior is that certain requests get completed, while others fail with a TaskCanceledException, which seems to be due to the requests timing out.
The only article I was able to find that seems to mention the same issue is from 2014: "Do Not Send ServerXMLHTTP or WinHTTP Requests to the Same Server". I believe this is the issue we are experiencing; how can we confirm this?
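For illustration, here is a hypothetical version of the call chain (controller names and URLs invented, not the literal repro code from the sample). If the downstream calls were made blocking like the first action below, thread-pool starvation alone could explain the timeouts when the callee shares the same Application Pool:

using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

public class Api1Controller : ApiController
{
    private static readonly HttpClient Client = new HttpClient();

    // Deadlock-prone: .Result holds a thread-pool thread for the whole
    // downstream chain while WebApi2 (and transitively WebApi3) compete
    // for threads from the same pool.
    public string GetBlocking()
    {
        return Client.GetStringAsync("http://localhost/webapi2/api/values").Result;
    }

    // Async all the way: the thread returns to the pool while awaiting.
    public async Task<string> GetAsync()
    {
        return await Client.GetStringAsync("http://localhost/webapi2/api/values");
    }
}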
Context
We've been assigned the task to create a centralized authentication server for multiple internal APIs of the company we work at.
We are using IdentityServer3 with reference tokens, so when one API calls a second API using a reference token, the second API will ask the authentication server to validate the token, which reproduces the issue.
I have added the IdentityServer tag, since this could be a common issue when multiple APIs communicate using reference tokens. Sample on GitHub.
Just one observation: you are using HttpClient as a static member in every controller, and according to this, HttpClient is not guaranteed to be thread-safe.
I am currently building an ASP.NET Web API whose sole purpose is to provide data to a Windows Phone app. I have already finished the Web API development and published it to an Azure website for testing purposes.
It works like a charm, but my issue now is that this Web API is publicly accessible. What I would like is the simplest way to limit the audience to my particular Windows Phone app and nobody else. I first thought of using an API key, but ASP.NET does not seem to offer this as a built-in option. The built-in options are not satisfying either, because they require a login.
Basically, I want only my Windows Phone app to be able to access the Web API, with the authorization transparent to the user (no authentication required). Any suggestions?
PS: The Web API is deployed on Azure, will not be distributed to other developers, and securing it with HTTPS is a possibility.
I did an implementation and blogged about it : http://www.ucodia.fr/blog/simple-authorization-asp-net-web-api/
As you're looking for a very basic, stop-the-casual-hacker method, there is a simple technique that can be used with Web API and ActionFilterAttributes. As there are two identically named attribute classes, make sure you're using the one from System.Web.Http.Filters.
It is very common to send a token of some sort that is either generated or encoded permanently into the application. So, in this example, a header value is added to every request and then checked by the filter.
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

[VerifyToken]
public class MyAPI : ApiController
{
}

public class VerifyTokenAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext filterContext)
    {
        // Read the "Token" header without throwing when it is absent.
        IEnumerable<string> values;
        var token = filterContext.Request.Headers.TryGetValues("Token", out values)
            ? values.FirstOrDefault()
            : null;

        // GetCurrentToken() is your own lookup for the expected value.
        if (token != GetCurrentToken())
        {
            filterContext.Response =
                new HttpResponseMessage(HttpStatusCode.Unauthorized);
        }
        base.OnActionExecuting(filterContext);
    }
}
Does it need to be truly secure? Or just secure enough?
For 'secure enough', I will usually just use an API key, as you suggest, that the client app passes to the back end, and the back end will not respond if the key doesn't match. This provides a very basic level of security that prevents someone who just happens along from executing arbitrary API calls; for read-only, non-confidential data this has usually been enough for me.
For a bit of added security you could use a rotating, time-sensitive API key that changes based on the time of day (with perhaps a sliding window to account for minor clock differences between client and server). This ups the security-by-obscurity just a bit more, requiring someone to do a bit more work before they crack your code.
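A rough illustration of that rotating key, assuming a shared secret baked into the app and a made-up five-minute window:

using System;
using System.Security.Cryptography;
using System.Text;

public static class RotatingKey
{
    private const string SharedSecret = "replace-with-your-secret"; // assumption: baked into the app
    private const long WindowSeconds = 300;

    public static string ForWindow(int offset = 0)
    {
        // Derive the key from the shared secret plus the current time window.
        var window = DateTimeOffset.UtcNow.ToUnixTimeSeconds() / WindowSeconds + offset;
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(SharedSecret)))
        {
            return Convert.ToBase64String(hmac.ComputeHash(BitConverter.GetBytes(window)));
        }
    }

    public static bool IsValid(string presented)
    {
        // Accept the current or the immediately previous window (sliding allowance for clock skew).
        return presented == ForWindow(0) || presented == ForWindow(-1);
    }
}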
Depending on your app, it might also be a good idea to consider rate-throttling your API responses: if there is a discernible maximum number of calls any given client should make to your API, you could geometrically slow down the responses to thwart misuse if someone does bypass your security.
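And a toy sketch of that geometric slowdown; the client keying, thresholds, and cap are all invented for illustration:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class GeometricThrottle
{
    private static readonly ConcurrentDictionary<string, int> Counts =
        new ConcurrentDictionary<string, int>();
    private const int FreeCalls = 100;

    public static Task DelayFor(string clientKey)
    {
        var count = Counts.AddOrUpdate(clientKey, 1, (_, c) => c + 1);
        if (count <= FreeCalls)
        {
            return Task.FromResult(0); // no delay within the allowance
        }

        // Double the delay for every 10 calls over the allowance, capped at 30 s.
        var delayMs = Math.Min(100 * Math.Pow(2, (count - FreeCalls) / 10.0), 30000);
        return Task.Delay((int)delayMs);
    }
}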
I'm receiving an error when I'm attempting to consume a web service:
Cannot read the token from the 'Timestamp' element with the 'http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd' namespace for BinarySecretSecurityToken, with a '' ValueType.
I'm not quite sure what is going wrong. The client is an ASP.NET web application making the call. From Wireshark, one can see the POST go out and the response come back, but then it errors out like this.
To give some background, this is WCF calling a Java-served web service.
You may need to add a security timestamp SOAP header to the message. Look at this SO question, where they had the opposite problem; it may be helpful to look at their configuration. Also, you may save yourself some grief if you can use one of the WCF Interop Express bindings for accessing a Java service implementing WS-Security.
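If you end up building the binding by hand, here is a minimal sketch of enabling the timestamp on a custom binding; the choice of UserName-over-transport security and SOAP 1.1 are assumptions to adapt to what the Java service actually expects:

using System.ServiceModel.Channels;
using System.Text;

public static class TimestampBinding
{
    public static Binding Create()
    {
        var security = SecurityBindingElement.CreateUserNameOverTransportBindingElement();
        security.IncludeTimestamp = true; // emit the wsu:Timestamp header the service wants

        return new CustomBinding(
            security,
            new TextMessageEncodingBindingElement(MessageVersion.Soap11, Encoding.UTF8),
            new HttpsTransportBindingElement());
    }
}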
I've been working on building a set of enterprise services using WCF 4 within my organization and could use some guidance. The setup/architecture I've designed thus far is similar to a lightweight custom ESB. I have one main "broker" service (using wsHttp), that connects to three underlying netTcp services. Both the broker and underlying services share a common assembly that contains the model, as well as the contract interfaces. In the broker service I can choose which operations from the underlying services I want to expose. The idea is that potentially we can have a core of set of services and few different brokers on top of them depending on the business need. We plan on hosting everything (including the netTcp services) in IIS 7.5 leveraging AppFabric and WAS.
Here's my question: is such a design good practice, and will it scale? These services should be able to handle thousands of transactions per day.
I've played around with the routing in WCF 4 as an alternative to the broker service concept I've mentioned; however, I have not seen much value in it, as it simply does a redirect.
I'm also trying to figure out how to optimize the proxies that the broker service has to the underlying services (assuming this practice is advisable). Right now I simply have the proxies as private members within the broker's main class. Example:
private UnderlyingServiceClient _underlyingServiceClient = new UnderlyingServiceClient();
I've considered caching the proxy; however, I'm concerned that if I run into a fault, the entire proxy at that point would be faulted and could not be reused (unless I catch the fault and simply re-instantiate, as in the sketch below).
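Something like this is what I have in mind for re-instantiating on fault (sketch only, using the generated UnderlyingServiceClient from above):

using System.ServiceModel;

private UnderlyingServiceClient _underlyingServiceClient;

private UnderlyingServiceClient Client
{
    get
    {
        if (_underlyingServiceClient == null ||
            _underlyingServiceClient.State == CommunicationState.Faulted)
        {
            if (_underlyingServiceClient != null)
            {
                _underlyingServiceClient.Abort(); // a faulted proxy cannot be reused
            }
            _underlyingServiceClient = new UnderlyingServiceClient();
        }
        return _underlyingServiceClient;
    }
}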
My goal with these services is to ensure that the client consuming them can get "in and out" as quickly as possible. A quick request-reply.
Any input/feedback would be greatly appreciated.
If I understand you correctly, you have a handful of "backend" services, possibly on separate computers. Then you have one "frontend" service, which basically acts like a proxy to the backend but is fully customizable in code. We are doing this exact setup with a few computers in a rack. Our frontend is IIS7; the backend is a bunch of WCF services on several machines.
One, will it scale? Well, adding more processing power on the backend is pretty easy, and writing some load balancing code isn't too bad either. For us, the problem was the frontend was getting bogged down, even though it was only acting as a proxy. We ended up adding a couple more front end computers, "brokers" as you call them. That works very well. People have suggested that I use Microsoft ForeFront for automatic load balancing, but I have not researched it yet.
Two, should you cache the proxy? I would say definitely yes, but it kind of sucks. These channels DO fault occasionally. I have a thread always running in the background. Every 3 seconds, it wakes up, checks all the wcf services and wcf clients in the app. Any that are faulted get destroyed and recreated.
check host channels: ...

// Background watchdog: recreate the service host whenever it leaves the Opened state.
while (true)
{
    try
    {
        if (MyServiceHost.State != System.ServiceModel.CommunicationState.Opened)
        {
            ReCreate();
        }
    }
    catch { /* swallow and retry on the next pass */ }
    System.Threading.Thread.Sleep(3000);
}

check client channels: ...

// The factory is cached alongside the channels it creates.
private static ChannelFactory<IMath> mathClientFactory = new ChannelFactory<IMath>(bindingHttpBin);

// Watchdog for client channels: replace any channel that has faulted.
while (true)
{
    try
    {
        // Channels created by ChannelFactory<IMath> report their state through ICommunicationObject.
        if (((System.ServiceModel.ICommunicationObject)MyServiceClient).State == System.ServiceModel.CommunicationState.Faulted)
        {
            EndpointAddress ea = new EndpointAddress(ub.Uri);
            MyServiceClient = mathClientFactory.CreateChannel(ea);
        }
    }
    catch { /* swallow and retry on the next pass */ }
    System.Threading.Thread.Sleep(3000);
}
On the client, I not only cache the channel, but also cache the ChannelFactory. This is just for convenience though to make the code for creating a new channel shorter.