Who is calling my WebService? - asp.net

I have a web service that is on an internal server. It can be called from any website on our network.
More and more developers are starting to use it. Currently, probably 20+ pages use this service, and the number is growing fast. I can see that a year from now, someone will be asking which pages use this service and which methods.
I would like to log the URL of the pages that use my web service as the requests come in.
It would also be nice to know the method they are calling. I need to do this in a way that does not affect the client web sites. My first thought was that I could write some code in the global.asax.
I have added some code to Application_BeginRequest to log the request object details, but there does not appear to be anything about the requesting URL.
What am I missing? Should I be looking at a different object?
Thanks.

Without disrupting existing users, this is going to be difficult. HttpContext.Current.Request.Url will just return the URL used to call your web service, not the web page that called it.
The closest you can do without disrupting existing apps and forcing developers to change them is to grab the HttpContext.Current.Request.UserHostAddress, so you can at least get the IP of the machine calling your service.
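For example, a minimal sketch of that kind of logging in Global.asax (LogServiceCall is a hypothetical helper -- substitute whatever logging you already use):
// Global.asax.cs -- records the caller's IP and the service URL being requested.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    var request = HttpContext.Current.Request;
    string callerIp = request.UserHostAddress;        // IP of the machine calling the service
    string requestedUrl = request.Url.AbsoluteUri;    // URL of the web service endpoint that was requested
    LogServiceCall(callerIp, requestedUrl);           // hypothetical logging helper
}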
Beyond this, what you might want to consider is adding a parameter to your functions for "CallingApp" and then logging that in your code. That's pretty much what we did once we realized that we needed to know which apps are calling our service. We actually have an application monitoring service that uses a GUID for every new app we develop, and we pass that GUID to any web service. It's extra work, but to us it was critical because it allows us to know which apps will be affected when we need to perform updates or take the app server down for maintenance.
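In code, that boils down to something like this on each method (a sketch; all the names here are illustrative, not your actual contract):
[WebMethod]
public string GetStatus(int orderId, Guid callingAppId)
{
    // callingAppId is the GUID issued to the registered client application (illustrative).
    UsageLog.Record(callingAppId, "GetStatus");   // hypothetical logging helper
    return LookupStatus(orderId);                 // existing business logic (hypothetical)
}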
Edit - added
As a side note, at the point we realized we needed to track this, we had already been using web services for about a year. When faced with the same problem, we created a new set of web services, and included the extra field for the calling app in all of the new services, and then slowly went back and changed the older programs to point to the new services.
In retrospect, we wish we had known we would need to do this up front, because it created a lot of extra work. I'm guessing you'll be facing something similar if you really want to know exactly who is calling your services.

Without changing your interface, the only thing you can probably retrieve from the consumer is the IP address.
If you can change the interface, you could do this e.g. by adding authentication and logging who is calling what, or by having some simple "token" scheme.
However, both methods require you to change the interface and therefore break backwards compatibility - which you should never do.
By always ensuring both back and forward compatibility you should not need to know exactly who is calling your service, but only that it is actually used.

@David Stratton
Thanks for your help. I think your suggestions were great. I actually did something very different, after your answer gave me some new ideas.
I should have mentioned that I was generating the web proxy that most of my users were using to make calls against my web service. My clients in general do NOT use the proxy that Visual Studio creates.
Here is what I did:
I generated my web proxy client again and added calls to log the client's HttpContext before every call. Because the proxy is running on the client, it had access to everything I needed. That allowed me to record everything about the client and the specific call they were making. I realize this would not work for most cases, but all of my clients are internal web sites.
It also had the advantage that the clients did not have to modify their code at all; I just gave them all a new DLL. Problem solved - I get all the tracking data I want, and they did not have to change a thing.
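Roughly, each method in the regenerated proxy now does something like this before making the normal SOAP call (the proxy derives from SoapHttpClientProtocol, so Invoke is the usual generated call; LogClientCall and the method shown are just examples):
public string GetStatus(int orderId)
{
    // The proxy runs inside the client web site, so HttpContext describes the calling page.
    var context = System.Web.HttpContext.Current;
    if (context != null)   // null when the proxy is used outside a web request
    {
        LogClientCall(context.Request.Url.AbsoluteUri, "GetStatus");
    }
    return (string)Invoke("GetStatus", new object[] { orderId })[0];   // the normal generated SOAP call
}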
I was stuck trying to solve the problem from the web service's point of view.
I realize that there is still a hole in this implementation, because someone does not have to use my client proxy to call my service. I guess I could force that at some point in the future. For now, they could let Visual Studio generate a web proxy for my service. However, if they do that, I guess I don't care; that is not the recommended way to call my service. I think the only one doing that is an ASP.NET 1.1 web site. When they upgrade, they will probably switch to my generated proxy.

Without implementing some sort of authentication, there isn't a guaranteed way of knowing exactly who is calling your service - web metrics are the only way you can gauge what volume of traffic is hitting your service.
I'm sure you already know this but the whole point of a web service isn't to know or care who is calling it.

I have successfully used ...
Dim strReferrer As String = HttpContext.Current.Request.UrlReferrer.AbsoluteUri
to get the calling page that called my WEB API 2 Web Service.
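One caveat: UrlReferrer is Nothing/null when the caller doesn't send a Referer header (common for non-browser clients), so it's worth guarding the call. A C# equivalent with that check:
Uri referrer = HttpContext.Current.Request.UrlReferrer;
string callingPage = (referrer != null) ? referrer.AbsoluteUri : "(no referrer supplied)";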

Related

use webservice in same project or handle it with code?

This is a theoretical question.
Imagine an ASP.NET website. By clicking a button, the site sends an email. Now:
I can send the mail asynchronously with code
I can send the mail using QueueBackgroundWorkItem
I can call a one-way web service located in the same website
I can call a one-way web service located in ANOTHER website (or another subdomain)
None of the above solutions waits for the mail operation to complete, so they are all fine.
My question is: why should I use a service solution instead of the other solutions? Is there an advantage?
The 4th solution adds additional TCP/IP traffic to use the service, so it's not efficient, right?
If so, using a service under the same web site (3rd solution) also generates additional traffic. Is that correct?
I need to understand why people use services under the same website. Is there any reason besides making something available to AJAX calls?
Any information would be great. I really need to get opinions.
Best
The most appropriate architecture will depend on several factors:
the volume of emails that needs to be sent
the need to reuse the email sending capability beyond the use case described
the simplicity of implementation, deployment, and maintenance of the code
Separating out the sending of emails in a service either in the same or another web application will make it available to other applications and from client side code. It also adds some complexity to the code calling the service as it will need to deal with the case when the service is not available and handle errors that may occur when placing the call.
Using a separate web application for the service is useful if the volume of emails sent is really large, as it allows you to offload the work to one or more servers if needed. Given the use case described (a user clicks a button), this seems rather unlikely, unless the web site will have really large traffic. Creating a separate web application adds significant development, deployment, and maintenance work, initially and over time.
Unless the volume of emails to be sent is really large (millions per day) or there is a need to reuse the email capability in other systems, creating the email sending function within the same web application (first two options listed in the question) is almost certainly the best way to go. It will result in the least amount of initial work, is easy to deploy, and (perhaps most importantly) will be the easiest to maintain.
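As an illustration of how little code the in-application route takes, here is a sketch of the QueueBackgroundWorkItem option (requires .NET 4.5.2+; the SMTP settings are assumed to come from web.config and the addresses are placeholders):
using System.Net.Mail;
using System.Web.Hosting;

// Queue the send so the button click returns immediately; ASP.NET tries to keep the
// app domain alive while queued background work items are still running.
HostingEnvironment.QueueBackgroundWorkItem(cancellationToken =>
{
    using (var client = new SmtpClient())   // host/credentials picked up from web.config
    using (var message = new MailMessage("noreply@example.com", "user@example.com",
                                          "Your order", "Thanks for your order."))
    {
        client.Send(message);
    }
});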
An important concern to pay attention to when implementing an email sending function is robustness. Robustness can be achieved with any of the possible architectures and is somewhat of a different concern than the one emphasized by the question. However, it is important to consider the proper course of action if (1) the receiving SMTP server refuses to take the message (e.g., mailbox full; non-existent account; rejection as spam) or (2) an NDR is generated after the message is sent (e.g., rejection as spam). Depending on the kind of email sent, it may be OK to ignore these errors, or some corrective action may be needed (e.g., retry sending, alert the user who originated the email, ...).

Secured WCF service timing out on 2nd invocation of client channel

We have a secured & authenticated WCF service that cannot use service references. Thus, we provide the interface for the contracts and open the client channel manually.
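For reference, the manual pattern we use is roughly the following (IMyService, Login, DoWork, and the endpoint name are placeholders for our actual contract and config):
// using System.ServiceModel;
var factory = new ChannelFactory<IMyService>("MyServiceEndpoint");   // endpoint name from config (placeholder)
IMyService channel = factory.CreateChannel();
try
{
    channel.Login("user", "password");      // required first call (placeholder)
    channel.DoWork();                       // subsequent calls work fine
    ((IClientChannel)channel).Close();
}
catch (CommunicationException)
{
    ((IClientChannel)channel).Abort();      // never Close a faulted channel
    throw;
}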
We have found out that as long as we open it once, everything works fine. We can call several methods several times. However, if the channel is closed or just set to a new instance, the Login() (which happens to be the required first step prior to using the service) times out.
To make matters even more mysterious, this only happens on our production server. If I run the same project locally, I am able to log in as many times as I want. Consuming the methods inside a web browser (even from a code-behind ASPX page) does not have this problem, even against the production server. ONLY when it's a .NET client trying to open a client channel against the production server do we have this problem.
We are not even sure where to start looking. Any advice would be greatly appreciated.
UPDATE:
As per @Rene's suggestion, we turned on logging on both sides. In the client's log, there is a record of an error, which is basically the same timeout error we already got via the exception - nothing meaningful. In the server's logs, there are records of service methods being invoked successfully even after the 2nd Login(), and from the server's POV the request is served.
Additionally, I discovered that I could not even reproduce this issue on my machine using the same test project; it only reproduces on my developer's machine. I verified that we were on the same version of the .NET Framework and Visual Studio. It surely has to be a client-side problem. What could it be?
In case anyone else is looking for the answer, we finally found it -- the issue is due to the need to set System.Net.ServicePointManager.DefaultConnectionLimit on the client side to some higher value. The default value is 2, but in reality this allows only one proxy to be created and be usable. Setting it to 3 would allow 2 proxies to be created and used.
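In code, the client-side fix is a one-liner that has to run before any channels are created (the value 10 is just an example; pick something above the number of proxies you expect to use concurrently):
// e.g. at client start-up, before the first ChannelFactory/proxy is created
System.Net.ServicePointManager.DefaultConnectionLimit = 10;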

How do I handle/use 100 Continue in a REST web service?

Some background
I am planning to write a REST service which helps facilitate collaboration between multiple client systems. Similar to how git or hg handle things, I want the client to perform all merging locally and for the server to reject new changes unless they have been merged with existing changes.
How I want to handle it
I don't want clients to have to upload all of their change sets before being told they need to merge first. I would like to do this by performing a POST with the Expect 100 Continue header. The server can then verify that it can accept the change sets based on the header information (not hard for me in this case) and either reject the request or send the 100 Continue status through to the client who will then upload the changes.
My problem
As far as I have been able to figure out so far, ASP.NET doesn't support this scenario; by the time you see the request in your controller actions, the POST body has normally already been completely uploaded. I've had a brief look at WCF REST, but I haven't been able to see a way to do it there either; their conditional PUT example has the full request body before rejecting the request.
I'm happy to use any alternative framework that runs on .net or can easily be made to run on Windows Azure.
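For what it's worth, on the client side the flow I'm after looks roughly like this with HttpWebRequest (placeholder URL and header); the part I can't figure out is honoring it on the server before the body is read:
// using System.Net;
var request = (HttpWebRequest)WebRequest.Create("https://example.com/changesets");   // placeholder URL
request.Method = "POST";
request.ContentType = "application/octet-stream";
request.ServicePoint.Expect100Continue = true;   // send "Expect: 100-continue" and pause before the body
request.Headers.Add("X-Base-Revision", "42");    // illustrative header the server could validate up front
using (var body = request.GetRequestStream())    // body upload starts after the interim 100 Continue
{
    // write the change sets here
}
using (var response = (HttpWebResponse)request.GetResponse())
{
    // success, or a rejection the server ideally sent before reading the body
}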
I can't recommend WcfRestContrib enough. It's free, and it has a lot of abilities.
But I think you need to use OpenRasta instead of WCF in order to do what you're wanting. There's a lot of stuff out there on it, like the wiki, blog post 1, and blog post 2. It might be a lot to take in, but it's a .NET framework that's truly focused on being RESTful, and not RPC-like as WCF is. And it has the ability to work with headers, like you asked about. It even has PipelineContributors, which have access to the whole context of a call and can halt execution, handle redirections, or even render something different than what was expected.
EDIT:
As far as I can tell, this isn't possible in OpenRasta after all, because "100 continue is usually handled by the hosting environment, not by OR, so there’s no support for it as such, because we don’t get a chance to respond in the asp.net pipeline"

How to invoke code within a web app that isn't externally open?

Say, for example, you are caching data within your ASP.NET web app that isn't often updated. You have another process running outside of the app which occasionally updates this data; when it does, you would like the cached data to be cleared immediately so that the next request picks up the new data straight away.
The caching service is running in the context of your web app and not externally - what is a good method of calling into the web app to get it to update the cache?
You could, of course, just hack together a page or web service called ClearTheCache that does it. This can then be called by your other process. Of course you don't want this process to be externally usable or visible on your web app, so perhaps you could then check that incoming requests to this page are coming from localhost and, if not, throw a 404. Is this acceptable? Could this be spoofed at all (for instance if you used HttpApplication.Request.Url.Host)?
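As a rough sketch of that idea (handler name and cache key are illustrative; Request.IsLocal is a bit harder to fool than checking the Host header):
// ClearTheCache.ashx (illustrative) -- only honours requests from the local machine.
public class ClearTheCache : System.Web.IHttpHandler
{
    public void ProcessRequest(System.Web.HttpContext context)
    {
        if (!context.Request.IsLocal)
        {
            context.Response.StatusCode = 404;   // pretend the endpoint doesn't exist
            return;
        }
        context.Cache.Remove("rarely-updated-data");   // illustrative cache key
        context.Response.StatusCode = 204;
    }

    public bool IsReusable { get { return true; } }
}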
I can think of many different ways to go about this, mainly revolving around creating a page or web service and limiting requests to it somehow, but I'm not sure any are particularly elegant. Neither do I like the idea of the web app routinely polling out to another service to check whether it needs to execute something; I'd really like a PUSH solution.
Note: The caching scenario is just an example, I could use out-of-process caching here if needed. The question is really concentrating on invoking code, for any given reason, within a web app externally but in a controlled context.
Don't worry about limiting to localhost; you may want to push from a different server in future. Instead, share a key (asymmetric or symmetric doesn't really matter) between the two, have the PUSH service encrypt a block of data (control data, for example), and have the receiver decrypt it. If the block decrypts correctly and the data is readable, you can safely assume that only the service that was supposed to call you has done so, and you can perform the required actions! Not the neatest solution, but it allows you to scale beyond a single server.
EDIT
Having said that, an asymmetric key would be better: have the PUSH service hold the private part and the website the public part.
EDIT 2
Have the PUSH service put the date/time it generated the cipher text into the data block, then the client can be sure that a replay attack hasn't taken place by ensuring the date/time is within an acceptable time period (say a minute).
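A rough sketch of that handshake, assuming an RSA key pair where the PUSH service holds the private key and the site only the public key (key loading and transport are omitted; the one-minute window is just an example):
// using System.Security.Cryptography; using System.Text; using System.Globalization;
// pushPrivateRsa / sitePublicRsa are RSACryptoServiceProvider instances loaded elsewhere.

// PUSH service: sign the current UTC time with the private key.
byte[] payload = Encoding.UTF8.GetBytes(DateTime.UtcNow.ToString("o"));
byte[] signature = pushPrivateRsa.SignData(payload, "SHA1");   // a stronger hash is preferable where the CSP supports it

// Web site: verify the signature, then reject stale timestamps to stop replays.
bool signatureOk = sitePublicRsa.VerifyData(payload, "SHA1", signature);
DateTime issuedAt = DateTime.Parse(Encoding.UTF8.GetString(payload),
                                   CultureInfo.InvariantCulture, DateTimeStyles.RoundtripKind);
bool fresh = (DateTime.UtcNow - issuedAt) < TimeSpan.FromMinutes(1);
if (signatureOk && fresh)
{
    // safe to perform the requested action (e.g. clear the cache)
}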
Consider an external caching mechanism like the Enterprise Library's Caching Block, which would be available to both the web app and the service, or a file to cache data to.
HTH.

Is it wrong to switch client logic in the service tier?

We have two client apps (a web app and an agent app) accessing methods on the same service, but with slightly different requirements. My team wants to control behaviour on the service side by passing in an ApplicationType parameter to every method - which is essentially an enum containing the name of the calling client application - which is then used as a key for a database lookup to configure the service with client-specific options.
Something about this makes me uneasy as I don't think the service should really have to be aware of which client is calling it. I'm being told that it's easier to do it this way than pass a load of options dynamically through the method call.
Is there anything wrong with the client application telling the service who they are? Or is there really no difference between passing a config key versus a set of parameterized options?
One immediate problem I can see is that if we ever opened the service to another client run by a third party, we'd have to maintain their configuration settings locally for them. At the moment we own both client apps so it's not so much of a problem.
How would you do it?
In a layered solution, you should always consider your layers as onion-like layers, and dependencies should always go inwards, never outwards.
So your GUI/App layer should depend on the business logic layer, the business logic layer should depend on the data access layer, and so on.
Unless you categorize the clients (web, win, wpf, cli), or generalize it with client profiles (which client applications can configure), I would never pass in the name of the calling application, as this would make the business logic layer aware of and dependent upon the outside layer.
What kind of differences are we talking about that would depend on the type of application? If you elaborate a bit on the differences here, perhaps someone can come up with some helpful advice on other ways to solve this.
But I would definitely look for other ways before going down your described path.
Can't you create two different services, one for each application? The two services would share a lot of code, or call a single internal service with different parameterization depending on which outer service was called.
From a design perspective, this is no different than having users with different profiles. From a security perspective, I hope your applications are doing something to identify themselves, lest users of one application figure out a way to invoke the other application's logic as a hack. (Imagine an HR application being used by the mafia and a bank at the same time; one customer would be interested in hacking the other customer's application on a shared application host.)
In .NET the design doesn't feel this way because the credentials live on the thread (i.e. when you set the IPrincipal, that info rides on the thread -- it is communicated along with each method call, but not as a parameter).
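For example, inside a WCF operation the caller's identity is read from the ambient security context rather than from a method argument (a sketch; for ASMX you would look at HttpContext.Current.User instead):
// using System.ServiceModel;
string caller = (ServiceSecurityContext.Current != null)
    ? ServiceSecurityContext.Current.PrimaryIdentity.Name
    : "(anonymous)";
// The identity arrived with the call itself -- no ApplicationType-style parameter needed.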
Maybe what you are looking for in terms of a more elegant design is an ApplicationIdentity attribute. You'd have to write a custom one, I don't know of one in the framework right now.
This is a hard topic to discuss without a solid example.
You are right for feeling that way. Sending in the client type to change behaviour is not correct. It's not a bad idea for logging... but that's about it.
Here is what I would do:
Review each method to see what needs to be different and why.
Create different methods for different usages. The method name should be self explanatory. If you ever need to break compatibility, you have more control (assuming you're not using a versioning system which would be overkill for an in-house-only service).
In some cases request parameters (flags/enum values) are more appropriate.
In some cases knowing the operating environment is more appropriate (especially for data security). The operating environment is almost always sent during a login request - something like "attended"/"secure" (agent client) vs "unattended"/"not secure" (web client). Now you must exchange a session key (an HTTP cookie or an application-level session id). Sessions obviously don't work if you need to be 100% stateless -- especially if you want to scale out without session replication... if you have that requirement, send a structure in every request.
Think of requests like functions in your code. You wouldn't put a magic parameter that changes the behaviour of the function. You would create multiple functions that each behave differently. Whoever is using the function makes the decision which one to call.
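To make the analogy concrete (a sketch; the contract and names are purely illustrative):
public enum ApplicationType { Web, Agent }

// Avoid: one operation whose behaviour forks on a client-type flag.
public interface IQuoteServiceAvoid
{
    decimal GetQuote(int productId, ApplicationType clientType);
}

// Prefer: intention-revealing operations; the caller decides which one fits its situation.
public interface IQuoteService
{
    decimal GetQuoteForAgent(int productId);        // attended / secure environment
    decimal GetQuoteForSelfService(int productId);  // unattended web client
}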
So why is client type so wrong? Client type has no specific meaning on its own. It has many meanings and they may change over time. It's simply informational which is why it is a handy thing to log. An operating environment does have a specific meaning.
Here is a scenario to consider: What if a new client type is developed that is slightly different in a way that would break compatibility with the original request? Now you have two requests. 2 clients use Request A and 1 client uses Request B. If you pass in a client type to each request, the server is expected to work for every possible client type. Much harder to test and maintain!!
