My .Net Framework Web API has a few endpoints, all structured in a similar way.
Endpoint1 - /User/{userId}/Resource1/{resourceId}/...
Endpoint2 - /User/{userId}/Resource2/{resourceId}/...
This pattern can be more sub-resources deep.
However, this turns out to be a bit problematic in the various Application Insights tools. The errors blade shows each operation by its individual URL, not grouped by endpoint.
I can easily see if /User/123/Resource1/abc/ is causing errors, but it is difficult to tell whether that particular endpoint as a whole is problematic. Make sense?
Is there a way to make Application Insights smarter about how it groups operations in the UI and the other tools?
You can use a telemetry initializer to modify the OperationName. In your case you can replace the actual identifiers with "{userId}" and "{resourceId}".
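A minimal sketch of such an initializer, assuming operation names shaped like "GET /User/123/Resource1/abc" (adjust the regexes to your actual routes):

    using System.Text.RegularExpressions;
    using Microsoft.ApplicationInsights.Channel;
    using Microsoft.ApplicationInsights.DataContracts;
    using Microsoft.ApplicationInsights.Extensibility;

    // Normalizes operation names such as "GET /User/123/Resource1/abc"
    // to "GET /User/{userId}/Resource1/{resourceId}" so the portal
    // groups by endpoint instead of by individual URL.
    public class OperationNameNormalizer : ITelemetryInitializer
    {
        private static readonly Regex UserSegment =
            new Regex(@"/User/[^/]+", RegexOptions.IgnoreCase | RegexOptions.Compiled);
        private static readonly Regex ResourceSegment =
            new Regex(@"/(Resource\d+)/[^/]+", RegexOptions.IgnoreCase | RegexOptions.Compiled);

        public void Initialize(ITelemetry telemetry)
        {
            var request = telemetry as RequestTelemetry;
            if (request == null || string.IsNullOrEmpty(request.Context.Operation.Name))
                return;

            var name = UserSegment.Replace(request.Context.Operation.Name, "/User/{userId}");
            name = ResourceSegment.Replace(name, "/$1/{resourceId}");
            request.Context.Operation.Name = name;
            request.Name = name; // keep the request name consistent with the operation
        }
    }

Register it once at startup, e.g. TelemetryConfiguration.Active.TelemetryInitializers.Add(new OperationNameNormalizer());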
I have a web service that is on an internal server. It can be called from any website on our network.
More and more developers are starting to use it. Currently 20+ pages probably use this service, and the number is growing fast. I can see, a year from now, someone asking what pages are using this service and what methods.
I would like to log the URL of the pages that use my web service as the requests come in.
It would also be nice to know the method they are calling. I need to do this in such a way that it does not affect the client web sites. My first thought was that I could write some code in the Global.asax.
I have added some code to Application_BeginRequest to log the request object details, but there does not appear to be anything about the requesting URL.
What am I missing? Should I be looking at a different object?
Thanks.
Without disrupting existing users this is going to be difficult. HttpContext.Current.Request.Url will just return the URL used to call your web service, not the web page that called it.
The closest you can do without disrupting existing apps and forcing developers to change them is to grab the HttpContext.Current.Request.UserHostAddress, so you can at least get the IP of the machine calling your service.
Beyond this, what you might want to consider is adding a parameter to your functions for "CallingApp" and then logging that in your code. That's pretty much what we did once we realized that we needed to know which apps are calling our service. We actually have an application monitoring service that uses a GUID for every new app we develop, and we pass that GUID to any web service. It's extra work, but to us it was critical because it allows us to know which apps will be affected when we need to perform updates or take the app server down for maintenance.
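For what it's worth, a minimal sketch of what you can capture in Global.asax without touching any client (LogCall is a hypothetical helper; this gets you the caller's IP and the method hit, but not the calling page):

    // In Global.asax.cs
    using System;
    using System.Web;

    public class Global : HttpApplication
    {
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            var request = HttpContext.Current.Request;
            LogCall(
                request.UserHostAddress,    // IP of the calling machine
                request.HttpMethod,         // GET/POST
                request.Url.AbsolutePath);  // which service method was requested
        }

        private static void LogCall(string callerIp, string httpMethod, string path)
        {
            // Hypothetical: write to your log store of choice.
        }
    }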
Edit - added
As a side note, at the point we realized we needed to track this, we had already been using web services for about a year. When faced with the same problem, we created a new set of web services, and included the extra field for the calling app in all of the new services, and then slowly went back and changed the older programs to point to the new services.
In retrospect, we wish we had known we would need to do this up front, because it created a lot of extra work. I'm guessing you'll be facing something similar if you really want to know exactly who is calling your services.
The only thing you can probably retrieve from the consumer without changing your interface is the IP address.
If you can change the interface, you could do this e.g. by adding authentication and logging who is calling what, or by having some simple "token" scheme.
However both methods require you to change the interface and therefore break backwards compatibility - which you should never do.
By always ensuring both back and forward compatibility you should not need to know exactly who is calling your service, but only that it is actually used.
@David Stratton
Thanks for your help. I think your suggestions were great. I actually did something very different, after your answer gave me some new ideas.
I should have mentioned that I was generating the web proxy that most of my users were using to make calls against my web service. My clients in general do NOT use the proxy that Visual Studio creates.
Here is what I did:
I generated my web proxy client again and added calls to log the client's HttpContext before every call. Because the proxy runs on the client, it had access to everything I needed. That allowed me to record everything about the client and the specific call they were making. I realize this would not work for most cases, but all of my clients are internal web sites.
It also had the advantage in that the clients did not have to modify their code at all. I just gave them all a new DLL. Problem solved. I get all the tracking data I want, and they did not have to modify their code.
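Roughly, the change looked like this sketch (not my exact code; MyServiceProxy and LogClientCall are hypothetical names). Generated proxies derive from SoapHttpClientProtocol and are partial classes, so you can hook every outgoing call in one place:

    using System;
    using System.Net;
    using System.Web;
    using System.Web.Services.Protocols;

    public partial class MyServiceProxy : SoapHttpClientProtocol
    {
        protected override WebRequest GetWebRequest(Uri uri)
        {
            // This runs inside the *calling* web site, so HttpContext.Current
            // describes the page that is invoking the service.
            var ctx = HttpContext.Current;
            if (ctx != null)
            {
                LogClientCall(
                    ctx.Request.Url.AbsoluteUri, // the page making the call
                    uri.AbsolutePath);           // the service method being hit
            }
            return base.GetWebRequest(uri);
        }

        private static void LogClientCall(string callingPage, string serviceMethod)
        {
            // Hypothetical: send the pair back to whatever tracking store you use.
        }
    }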
I was stuck trying to solve the problem from the web service's point of view.
I realize that there is still a hole in this implementation, because no one is forced to use my client proxy to call my service. I guess I could enforce that at some point in the future. For now, they could let Visual Studio generate a web proxy for my service, but if they do that, I guess I don't care; that is not the recommended way to call my service. I think the only one doing that is an ASP.NET 1.1 web site. When they upgrade, they will probably switch to my generated proxy.
Without implementing some sort of authentication, there isn't a guaranteed way of knowing exactly who is calling your service - web metrics are the only way you can gauge what volume of traffic is hitting your service.
I'm sure you already know this but the whole point of a web service isn't to know or care who is calling it.
I have successfully used ...
Dim strReferrer As String = HttpContext.Current.Request.UrlReferrer.AbsoluteUri
to get the calling page that called my Web API 2 web service. (Note that UrlReferrer can be Nothing when the caller sends no Referer header, so guard against that before dereferencing it.)
When I talk to a developer from the Microsoft ASP.NET world and he uses the word "Webservice", does that word in every case imply a specific data format (XML? SOAP?)?
Or is it just anything you can call via http(s)?
In my view, it can be anything that's over http/https, and intended for calling by an application rather than a user's browser.
In particular, REST and SOAP are quite different in how they pass arguments in and get results back.
The term Webservice itself is language-agnostic.
This is a decent overview.
If an ASP.NET developer says WebService, you can pretty much bet that they are talking about XML/SOAP.
However this is not universally true. I think it's just fine to call anything a WebService if 1) the data source is available via the web or 2) it is a web address that can provide back information given a set of inputs.
For example, StackOverflow.com allows for screen scraping of the User pages in order for 3rd party applications to be built. It's not specifically XML/SOAP but I would consider it a Web Service (format #1)
In my experience this completely depends upon who you are talking to. For some ASP.NET developers it means only SOAP; for others it includes other things like REST. If you are planning on using the term in a specification, it would be a good idea to be a bit more specific.
I can only agree with Paul: anything queried over the web, using the http(s) protocol and not browser-oriented. But any web service should also be discoverable (WSDL and so on).
Personally I mean any HTTPHandler!
That means under ASP.NET, a page is a webservice that returns HTML.
WCF extends that concept, because by default, WCF service requests in ASP.NET are processed by Modules not Handlers.
So really any web request is a service.
Typically, though, ASP.NET developers will be referring to SOAP unless they add a prefix, i.e. "WCF web services".
Ok, so I'm looking for a bit of architecture guidance. My team is getting a chance to re-cast certain decisions with a new feature that we're building, and I wanted to see what SO thought :-) There are of course certain things that we're not changing, so the solution would have to fit this model. Namely, we've got an ASP.NET application which uses web services to allow users to perform actions on the system.
The problem comes in because, as with many systems, different users need access to different functions. Some roles have access to the Y button, others have access to both the Y and B buttons, while another still has access only to B. Most of the time that I see this, developers just put in a mishmash of if statements to deal with the UI state. My fear is that, left unchecked, this will become an unmaintainable mess, because in addition to putting authorization logic in the GUI, it needs to be put in the web services (which are called via ajax) to ensure that only authorized users call certain methods.
So my question to you is: how can a system be designed to decrease the random ad-hoc if statements here and there that check for specific roles, in a way that could be re-used in both GUI/webform code and web service code?
Just for clarity, this is an ASP.NET web application, using webforms, and Script# for the AJAX functionality. Don't let the script# throw you off of answering, it's not fundamentally different than asp.net ajax :-)
Moving on from traditional group-, role-, or operation-level permissions, there is a push toward "claims-based" authorization, like what was delivered with WCF.
Zermatt is the codename for the Microsoft class library that will help developers build claims-based applications on the server and client. Active Directory will become one of the STSs (security token services) an application would be able to authorize against, concurrently with your own as well as other industry-standard servers...
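As a rough illustration of the shift (using the later System.Security.Claims types rather than the Zermatt preview bits, and a made-up claim type), authorization becomes a question about claims instead of scattered role checks:

    using System.Linq;
    using System.Security.Claims;

    public static class Access
    {
        // "urn:example:button" is an illustrative claim type, not a standard one.
        public static bool CanUse(ClaimsPrincipal user, string button)
        {
            return user.Claims.Any(c =>
                c.Type == "urn:example:button" && c.Value == button);
        }
    }

    // Both the webform and the web service ask the same question:
    //   if (Access.CanUse(user, "B")) { /* show or allow the B action */ }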
In Code Complete (p. 411) Steve McConnell gives the following advice (which Bill Gates reads as a bedtime story in the Microsoft commercial).
"used in appropriate circumstances, table driven code is simpler than complicated logic, easier to modify, and more efficient."
"You can use a table to describe logic that's too dynamic to represent in code."
"The table-driven approach is more economical than the previous approach [rote object oriented design]"
Using a table-based approach you can easily add new "users" (as in the modeling idea of a user/agent along with its actions). It's a good way to avoid many "if"s, and I've used it before for situations like yours; it kept the code nice and tidy.
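A minimal sketch of the idea, with made-up role and operation names; the same table backs both the webform code (deciding which buttons to render) and the web services (rejecting unauthorized calls):

    using System;
    using System.Collections.Generic;

    public static class Permissions
    {
        // The "table": role -> operations it may perform.
        private static readonly Dictionary<string, HashSet<string>> Table =
            new Dictionary<string, HashSet<string>>(StringComparer.OrdinalIgnoreCase)
            {
                { "Manager",  new HashSet<string> { "Y", "B" } },
                { "Operator", new HashSet<string> { "Y" } },
                { "Auditor",  new HashSet<string> { "B" } },
            };

        public static bool IsAllowed(string role, string operation)
        {
            HashSet<string> ops;
            return Table.TryGetValue(role, out ops) && ops.Contains(operation);
        }
    }

    // UI:       buttonY.Visible = Permissions.IsAllowed(userRole, "Y");
    // Service:  if (!Permissions.IsAllowed(userRole, "B")) throw new UnauthorizedAccessException();

Adding a new role becomes a one-line table change instead of another scattering of if statements.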
We have two client apps (a web app and an agent app) accessing methods on the same service, but with slightly different requirements. My team wants to control behaviour on the service side by passing an ApplicationType parameter to every method - essentially an enum containing the name of the calling client application - which is then used as a key for a database lookup to configure the service with client-specific options.
Something about this makes me uneasy as I don't think the service should really have to be aware of which client is calling it. I'm being told that it's easier to do it this way than pass a load of options dynamically through the method call.
Is there anything wrong with the client application telling the service who they are? Or is there really no difference between passing a config key versus a set of parameterized options?
One immediate problem I can see is that if we ever opened the service to another client run by a third party, we'd have to maintain their configuration settings locally for them. At the moment we own both client apps so it's not so much of a problem.
How would you do it?
In a layered solution, you should always consider your layers as onion-like layers, and dependencies should always go inwards, never outwards.
So your GUI/App layer should depend on the businesslogic layer, the businesslogic layer should depend on the data access layer, and similar.
Unless you categorize the clients (web, win, wpf, cli), or generalize it with client profiles (which client applications can configure), I would never pass in the name of the calling application, as this would make the business logic layer aware of and dependent upon the outside layer.
What kind of differences are we talking about that would depend on the type of application? If you elaborate a bit on the differences here, perhaps someone can come up with some helpful advice on other ways to solve this.
But I would definitely look for other ways before going down your described path.
Can't you create two different services, one for each application? The two services will share a lot of code or call a single internal service with different parameterization depending on what outer service was called.
From a design perspective, this is no different than having users with different profiles. From a security perspective, I hope your applications are doing something to identify themselves, lest users of one application figure out a way to invoke the other application's logic as a hack. (Imagine an HR application being used by the mafia and a bank at the same time; one customer would be interested in hacking the other customer's application on a shared application host.)
In .NET the design doesn't feel this way because the credentials live on the thread (i.e. when you set the IPrincipal, that info rides on the thread - it is communicated along with each method call, but not as a parameter).
Maybe what you are looking for in terms of a more elegant design is an ApplicationIdentity attribute. You'd have to write a custom one, I don't know of one in the framework right now.
This is a hard topic to discuss without a solid example.
You are right to feel that way. Sending in the client type to change behaviour is not correct. It's not a bad idea for logging... but that's about it.
Here is what I would do:
Review each method to see what needs to be different and why.
Create different methods for different usages. The method name should be self explanatory. If you ever need to break compatibility, you have more control (assuming you're not using a versioning system which would be overkill for an in-house-only service).
In some cases request parameters (flags/enum values) are more appropriate.
In some cases knowing the operating environment is more appropriate (especially for data security). The operating environment is almost always sent during a login request - something like "attended"/"secure" (agent client) vs "unattended"/"not secure" (web client). After that you must exchange a session key (an HTTP cookie or an application-level session id). Sessions obviously don't work if you need to be 100% stateless - especially if you want to scale out without session replication... if you have that requirement, send a structure in every request.
Think of requests like functions in your code. You wouldn't put a magic parameter that changes the behaviour of the function. You would create multiple functions that each behave differently. Whoever is using the function makes the decision which one to call.
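A tiny sketch of the contrast, with hypothetical names:

    public class ReportService
    {
        // Avoid: a magic parameter that forks behaviour on who is calling.
        public string BuildReport(int id, string clientType)
        {
            return clientType == "Agent"
                ? BuildDetailedReport(id)
                : BuildSummaryReport(id);
        }

        // Prefer: separate, self-describing operations; the caller chooses.
        public string BuildSummaryReport(int id)  { return "summary for " + id; }
        public string BuildDetailedReport(int id) { return "details for " + id; }
    }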
So why is client type so wrong? Client type has no specific meaning on its own. It has many meanings and they may change over time. It's simply informational which is why it is a handy thing to log. An operating environment does have a specific meaning.
Here is a scenario to consider: what if a new client type is developed that is slightly different, in a way that would break compatibility with the original request? Now you have two requests: two clients use Request A and one client uses Request B. If you instead pass a client type into every request, the server is expected to work for every possible client type - much harder to test and maintain!
I am designing an error logging feature so our servers (each doing different things) can have a central data store for logging errors.
Would it be a good idea to have the various applications writing to the error log file using a WCF service, or is that a bad idea?
They can do it just via ADO.NET to the database, which I think is the simpler route.
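For reference, the direct ADO.NET route is only a few lines; a minimal sketch, assuming an ErrorLog table and a connection string of your own:

    using System;
    using System.Data.SqlClient;

    public static class ErrorLogger
    {
        public static void Log(string server, Exception ex, string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "INSERT INTO ErrorLog (Server, Message, Detail, LoggedAt) " +
                "VALUES (@server, @message, @detail, @loggedAt)", conn))
            {
                cmd.Parameters.AddWithValue("@server", server);
                cmd.Parameters.AddWithValue("@message", ex.Message);
                cmd.Parameters.AddWithValue("@detail", ex.ToString());
                cmd.Parameters.AddWithValue("@loggedAt", DateTime.UtcNow);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }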
How about having a look at syslog? It was made for exactly that purpose.
I'd say just log to your local data store. The advantages are:

Speed - it's pretty rapid to just dump your chosen error report to an existing data connection.
Traceability - what happens if you have an error in your service? You lose all ability to chase down errors on all servers.
Simplicity - if you change the endpoint for your errors service, you have to update every other application that uses the error service.
Reporting - do you really want to trawl through error reports from tens/hundreds of applications in one place when you could easily find them in the data store local to the app?
Of course, any of these points could be viewed from the other side, these are just my opinions.
We're looking at a similar approach, except for audit logging as well as error handling.
We're looking at using WCF over netTcp, and also at using the event log, but that seems to require high-trust settings and may have performance issues.
Not convinced by ZombieSheep's objections:
It's pretty rapid to dump your chosen error report over an existing WCF connection. Seriously. Plus, you can do it async/queued. Not a key factor for me.
You log to the central service and to a local store. When the error service comes back online, you poll your machines for events since the last timestamp. Problem solved.
Use a DNS alias, and don't change the path - that's the way you should do internal addressing anyway, IMO.
What if you have multiple apps on a single machine? What if you want to see the timing of errors across multiple apps?