How do I handle 100 Continue in a REST web service? - asp.net

Some background
I am planning to write a REST service that helps facilitate collaboration between multiple client systems. Similar to how git or hg handle things, I want the client to perform all merging locally and for the server to reject new changes unless they have been merged with existing changes.
How I want to handle it
I don't want clients to have to upload all of their change sets before being told they need to merge first. I would like to do this by performing a POST with the Expect: 100-continue header. The server can then verify that it can accept the change sets based on the header information (not hard for me in this case) and either reject the request or send the 100 Continue status through to the client, who will then upload the changes.
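For reference, a minimal sketch of what that flow looks like from the client side, assuming HttpClient; the /changesets endpoint and the X-Base-Revision header are hypothetical:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ChangeSetUploader
    {
        static async Task Main()
        {
            using (var client = new HttpClient())
            {
                var request = new HttpRequestMessage(HttpMethod.Post, "https://example.com/changesets");
                // Ask the server to vet the headers and reply 100 Continue
                // (or an error) before the body is transmitted.
                request.Headers.ExpectContinue = true;
                // Hypothetical header naming the revision the client merged against.
                request.Headers.Add("X-Base-Revision", "abc123");
                request.Content = new ByteArrayContent(new byte[0]); // change sets would go here
                var response = await client.SendAsync(request);
                Console.WriteLine(response.StatusCode);
            }
        }
    }

The 100-continue handshake itself is handled inside the client HTTP stack; the problem below is about getting the same kind of early rejection on the server side.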
My problem
As far as I have been able to figure out so far, ASP.NET doesn't support this scenario; by the time you see the request in your controller actions, the POST body has normally already been completely uploaded. I've had a brief look at WCF REST, but I haven't been able to see a way to do it there either; their conditional PUT example reads the full request body before rejecting the request.
I'm happy to use any alternative framework that runs on .net or can easily be made to run on Windows Azure.

I can't recommend WcfRestContrib enough. It's free, and it has a lot of abilities.
But I think you need to use OpenRasta instead of WCF in order to do what you're wanting. There's a lot of material out there on it, like the wiki, blog post 1, and blog post 2. It might be a lot to take in, but it's a .NET framework that's truly focused on being RESTful, rather than RPC-like as WCF is. And it has the ability to work with headers, as you asked about. It even has PipelineContributors, which have access to the whole context of a call and can halt execution, handle redirections, or even render something different than what was expected.
EDIT:
As far as I can tell, this isn't possible in OpenRasta after all, because "100 continue is usually handled by the hosting environment, not by OR, so there’s no support for it as such, because we don’t get a chance to respond in the asp.net pipeline"

Related

Can I set up a mock server by cloning real server responses?

TL;DR
Is there a tool that can record all the network activity as I visit a website and create a mock server that responds to those requests with the same responses?
I'm investigating ways of mocking the complex backend for our React application. We're currently developing against the real backend (plus test/staging environments). I've looked around a bit, and it looks like there are a number of tools for mocking individual endpoints/features and sending the rest through to the real API (Mirage is leading the pack at the moment).
However, the Platonic ideal would be to mock the entire server so that a front end dev can work without an internet connection (again: Platonic ideal). It's a crazy lofty goal, I know this. And of course it would require mocking not only our backend but also requests to any 3rd-party data sources. And of course the data would be thin and dumb and stale. But this is just for ultra-speedy front end development, it's just mocking. The data doesn't need to be rich; it'll be up to us to make it as useful/realistic as we need it to be.
Probably the quickest way would be to recreate the responses the backend is already sending, and then modify them as needed for new features or features under test, etc.
To do this, we might go into Chrome DevTools and recreate everything on the Network tab: mock every request that was made by hardcoding the response it returned. Taking it from there, we could do smart things like use URL pattern matching to return a simple placeholder image for any request for a user's avatar.
What I want to know is: is there any tool out there that does this automatically? That can watch as I load the site, click a bunch of stuff, take a bunch of actions, and spit out or set up a mock that recreates all the responses? And then we could edit any of them as we saw fit to simplify.
Does something like this exist? Maybe it's a browser tool. Maybe it's webpack middleware. Maybe it's a magic rooster.
PS. I imagine this may not be a specific, actionable enough question for SO. I'll understand if it's closed, but I'd really appreciate being directed somewhere where such questions/discussions would fit? I'm new enough to this world that SO is all I know!
There is a practice called service virtualization - a subset of the test double family.
Wikipedia has a list of tools you can use to do that. Here are a couple of examples from that list:
Open source: WireMock will let you record the mocks and edit the responses programmatically (a small record-and-proxy sketch follows below)
Commercial: Traffic Parrot will let you record the mocks and edit the responses via a UI and/or programmatically
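For the WireMock route, a rough sketch using WireMock.Net (the .NET port, chosen to match the other examples here) in record-and-proxy mode; the settings names are from memory, so check them against the current docs:

    using WireMock.Server;
    using WireMock.Settings;

    class MockRecorder
    {
        static void Main()
        {
            // Forward everything to the real backend and save each
            // request/response pair as an editable mapping file.
            var server = WireMockServer.Start(new WireMockServerSettings
            {
                Port = 9090,
                ProxyAndRecordSettings = new ProxyAndRecordSettings
                {
                    Url = "https://real-backend.example.com", // placeholder
                    SaveMapping = true,
                    SaveMappingToFile = true
                }
            });

            // Point the app at http://localhost:9090 and click around to record;
            // later runs can replay from the saved mappings with no backend at all.
            System.Console.ReadLine();
            server.Stop();
        }
    }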
https://mswjs.io/ can mock all the requests for you. It intercepts all your client's requests and returns your defined mock data.

use webservice in same project or handle it with code?

This is a theoretical question.
Imagine an ASP.NET website. Clicking a button makes the site send an email. Now:
I can send mail async with code
I can send mail using QueueBackgroundWorkItem
I can call a ONEWAY webservice located in same website
I can call a ONEWAY webservice located in ANOTHER website (or another subdomain)
None of the above solutions waits for the mail operation to complete, so they are all fine in that respect (a minimal sketch of the second option follows below).
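As a concrete illustration of option 2, here is a minimal sketch assuming .NET 4.5.2+ and System.Web hosting; the sender address and MailDispatcher name are placeholders:

    using System.Net.Mail;
    using System.Web.Hosting;

    public static class MailDispatcher
    {
        // Queue the send so the request that triggered it returns immediately;
        // ASP.NET gives queued work a grace period during app-domain shutdown.
        public static void QueueMail(string to, string subject, string body)
        {
            HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
            {
                using (var message = new MailMessage("noreply@example.com", to, subject, body))
                using (var client = new SmtpClient()) // reads <system.net><mailSettings> from web.config
                {
                    await client.SendMailAsync(message);
                }
            });
        }
    }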
My question is: why should I use a service-based solution instead of the others? Is there an advantage?
The 4th solution adds extra TCP/IP traffic just to use the service, so it's not efficient, right?
If so, using a service under the same website (3rd solution) also generates additional traffic. Is that correct?
I need to understand why people use services under the same website. Is there any reason besides making something available to AJAX calls?
Any information would be great; I really need to get opinions.
Best
The most appropriate architecture will depend on several factors:
the volume of emails that needs to be sent
the need to reuse the email sending capability beyond the use case described
the simplicity of implementation, deployment, and maintenance of the code
Separating out the sending of emails into a service, either in the same or another web application, will make it available to other applications and to client-side code. It also adds some complexity to the code calling the service, as it will need to deal with the case where the service is unavailable and handle errors that may occur when placing the call.
Using a separate web application for the service is useful if the volume of emails sent is really large, as it allows you to offload the work to one or more servers if needed. Given the use case described (user clicks on a button), this seems rather unlikely unless the website will have really large traffic. Creating a separate web application adds significant development, deployment, and maintenance work, both initially and over time.
Unless the volume of emails to be sent is really large (millions per day) or there is a need to reuse the email capability in other systems, creating the email sending function within the same web application (first two options listed in the question) is almost certainly the best way to go. It will result in the least amount of initial work, is easy to deploy, and (perhaps most importantly) will be the easiest to maintain.
An important concern to pay significant attention to when implementing an email sending function is robustness. Robustness can be achieved with any of the possible architectures and is a somewhat different concern from the one emphasized by the question. However, it is important to consider the proper course of action if (1) the receiving SMTP server refuses to take the message (e.g., mailbox full; non-existent account; rejection as spam) or (2) an NDR is generated after the message is sent (e.g., rejection as spam). Depending on the kind of email sent, it may be OK to ignore these errors, or some corrective action may be needed (e.g., retry sending, alert the user who originated the email, ...).
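To make case (1) concrete, here is a rough sketch of a retry wrapper using System.Net.Mail; the attempt count and back-off are arbitrary choices, and NDRs (case 2) arrive later by email, so they cannot be caught in code like this:

    using System;
    using System.Net.Mail;
    using System.Threading.Tasks;

    public static class RobustMailer
    {
        public static async Task SendWithRetryAsync(MailMessage message, int maxAttempts = 3)
        {
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    using (var client = new SmtpClient())
                    {
                        await client.SendMailAsync(message);
                    }
                    return;
                }
                catch (SmtpFailedRecipientException)
                {
                    // Mailbox-level refusal (full, non-existent account):
                    // retrying will not help, so surface it to the originator.
                    throw;
                }
                catch (SmtpException) when (attempt < maxAttempts)
                {
                    // Possibly transient (connection dropped, greylisting):
                    // back off and try again.
                    await Task.Delay(TimeSpan.FromSeconds(5 * attempt));
                }
            }
        }
    }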

Best practices - When to use Server / Client code

I was searching for information about one of my doubts, but I couldn't find any. I'm working on an ASP.NET site and using AJAX to request data; since I'm currently working on my own, I don't know web programming best practices.
I usually get all the information I need from the server and use JavaScript to display/modify it and AJAX to send it back to the server. A friend of mine uses PHP for most of the programming; he rarely uses any JavaScript, and he told me it's way faster this way, since it does not consume the client's resources.
The basic question actually is: according to best practices, is it better for the server to just provide the data needed by the application, or is it better to use the server for more than this?
That is going to depend on the expected amount of traffic for the site, the amount of content being generated, and the expectations of the end-user.
In a high-traffic site, it is actually "faster" for the end user if you let JavaScript generate a portion of the content on the client side. Also, you can deliver a better user experience during long load times through client-side scripting than you can if the content is loaded completely on the server.
In most cases you would need at least some backend code, e.g. when validating user input or when retrieving information from a real persistent database. And what about when somebody has JavaScript disabled in their user agent, somebody using a screen reader, or search engine crawlers?
IMHO you should at least (again, in most cases) have backend code that is able to do all the work and spit out a full webpage to the client. In addition to this, you can add JavaScript functionality to make the user interface "smoother", for example by validating user data before submitting it to the server (remember to ALWAYS also check on the server side) or by loading partial HTML (AJAX).
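To illustrate the server-side check, a minimal sketch in ASP.NET MVC style; the SignupModel and its rules are placeholder examples:

    using System.ComponentModel.DataAnnotations;
    using System.Web.Mvc;

    public class SignupModel
    {
        [Required, EmailAddress]
        public string Email { get; set; }
    }

    public class SignupController : Controller
    {
        [HttpPost]
        public ActionResult Create(SignupModel model)
        {
            // Client-side JavaScript may have validated this already,
            // but the server must never trust that it did.
            if (!ModelState.IsValid)
                return new HttpStatusCodeResult(400);

            // ... persist the signup ...
            return new HttpStatusCodeResult(201);
        }
    }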
The point about it being faster or using fewer resources when done server-side doesn't make much sense. Even if it were true, it wouldn't matter much (and again, I highly doubt the claim). If you use client-side scripting to load only the parts that are needed, it would rather use fewer resources on both the client and the server side.

Is it dangerous if a web resource POSTs to itself?

While reading some articles about writing web servers using Twisted, I came across this page that includes the following statement:
While it's convenient for this example, it's often not a good idea to
make a resource that POSTs to itself; this isn't about Twisted Web,
but the nature of HTTP in general; if you do this, make sure you
understand the possible negative consequences.
In the example discussed in the article, the resource is a web resource retrieved using a GET request.
My question is: what are the possible negative consequences that can arise from having a resource POST to itself? I am only concerned about the aspects related to the HTTP protocol, so please ignore the fact that I mentioned Twisted.
The POST verb is used for making a new resource in a collection.
This means that POSTing to a resource has no direct meaning (POST endpoints should always be collections, not resources).
If you want to update your resource, you should PUT to it.
Sometimes, you do not know if you want to update or create the resource (maybe you've created it locally and want to create-or-update it). I think that in that case, the PUT verb is more appropriate because POST really means "I want to create something new".
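To sketch the convention being described, in ASP.NET Web API terms (the Order model and routes are illustrative assumptions):

    using System.Web.Http;

    public class Order
    {
        public int Id { get; set; }
    }

    public class OrdersController : ApiController
    {
        // POST /api/orders -> create a new order in the collection
        [HttpPost]
        public IHttpActionResult Post(Order order)
        {
            // ... assign an id and store it ...
            return Created("/api/orders/" + order.Id, order);
        }

        // PUT /api/orders/5 -> create or replace the order with id 5
        [HttpPut]
        public IHttpActionResult Put(int id, Order order)
        {
            // ... upsert by id ...
            return Ok(order);
        }
    }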
There's nothing inherently wrong with a page POSTing back to itself - in fact, many of the widely-used frameworks (ASP.NET, etc.) use that method to handle various events that happen on the client - some data is posted back to the same page, where the server processes it and sends a new response.

Who is calling my WebService?

I have a web service that is on an internal server. It can be called from any website on our network.
More and more developers are starting to use it. Currently, probably 20+ pages use this service, and the number is growing fast. I can see, a year from now, someone asking what pages are using this service and what methods.
I would like to log the url of the pages that use my web service as the request come in.
It would also be nice to know the method they are calling. I need to do this in such a way that it does not affect the client websites. My first thought was that I could write some code in the Global.asax.
I have added some code to Application_BeginRequest to log the request object details, but there does not appear to be anything about the requesting URL.
What am I missing? Should I be looking at a different object?
Thanks.
Without disrupting existing users, this is going to be difficult. HttpContext.Current.Request.Url will just return the URL used to call your web service, not which web page called it.
The closest you can do without disrupting existing apps and forcing developers to change them is to grab the HttpContext.Current.Request.UserHostAddress, so you can at least get the IP of the machine calling your service.
Beyond this, what you might want to consider is adding a parameter to your functions for "CallingApp" and then logging that in your code. That's pretty much what we did once we realized that we needed to know which apps were calling our service. We actually have an application monitoring service that uses a GUID for every new app we develop, and we pass that GUID to every web service. It's extra work, but to us it was critical because it allows us to know which apps will be affected when we need to perform updates or take the app server down for maintenance.
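A minimal sketch of both suggestions combined, asmx-style; the service name and the Log sink are illustrative:

    using System;
    using System.Web;
    using System.Web.Services;

    public class MyService : WebService
    {
        [WebMethod]
        public string GetData(Guid callingAppId)
        {
            // Record which registered app called, from where, and what it called.
            Log(callingAppId,
                HttpContext.Current.Request.UserHostAddress,
                HttpContext.Current.Request.Url.AbsolutePath);
            return "...";
        }

        private static void Log(Guid app, string ip, string method)
        {
            // ... write to a durable store ...
        }
    }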
Edit - added
As a side note, at the point we realized we needed to track this, we had already been using web services for about a year. When faced with the same problem, we created a new set of web services, and included the extra field for the calling app in all of the new services, and then slowly went back and changed the older programs to point to the new services.
In retrospect, we wish we had known we would need to do this up front, because it created a lot of extra work. I'm guessing you'll be facing something similar if you really want to know exactly who is calling your services.
The only thing you can probably retrieve from the consumer is the IP address without changing your interface.
If you can change this, you could do so e.g. by adding authentication and logging who is calling what, or by having some simple "token" scheme.
However both methods require you to change the interface and therefore break backwards compatibility - which you should never do.
By always ensuring both back and forward compatibility you should not need to know exactly who is calling your service, but only that it is actually used.
@David Stratton
Thanks for your help. I think your suggestions were great. I actually did something very different, after your answer gave me some new ideas.
I should have mentioned that I was generating the web proxy that most of my users were using to make calls against my web service. My clients in general do NOT use the proxy that Visual Studio creates.
Here is what I did:
I generated my web proxy client again and added calls to log the HttpContext of the client before every call. Because the proxy is running on the client, it had access to everything I needed. That allowed me to record everything about the client and the specific call they were making. I realize this would not work for most cases, but all of my clients are internal websites.
It also had the advantage that the clients did not have to modify their code at all. I just gave them all a new DLL. Problem solved: I get all the tracking data I want, and they did not have to modify their code.
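For anyone trying the same approach, the shape of it is roughly this; a sketch assuming an asmx-style proxy generated as a partial class, with a hypothetical LogCall sink:

    using System;
    using System.Net;
    using System.Web;

    public partial class MyServiceProxy : System.Web.Services.Protocols.SoapHttpClientProtocol
    {
        protected override WebRequest GetWebRequest(Uri uri)
        {
            // This runs inside the *calling* site, so the full client context
            // is available, including the page making the call.
            var callingPage = HttpContext.Current?.Request.Url.AbsoluteUri;
            LogCall(callingPage, uri.AbsoluteUri);
            return base.GetWebRequest(uri);
        }

        private static void LogCall(string callingPage, string serviceUrl)
        {
            // ... send to the tracking store ...
        }
    }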
I was stuck trying to solve the problem from the web service's point of view.
I realize that there is still a hole in this implementation, because clients do not have to use my proxy to call my service. I guess I could force that at some point in the future. For now, they could let Visual Studio generate a web proxy for my service. However, if they do that, I guess I don't care; that is not the recommended way to call my service. I think the only one doing that is an ASP.NET 1.1 website. When they upgrade, they will probably switch to my generated proxy.
Without implementing some sort of authentication, there isn't a guaranteed way of knowing exactly who is calling your service - web metrics are the only way you can gauge what volume of traffic is hitting it.
I'm sure you already know this but the whole point of a web service isn't to know or care who is calling it.
I have successfully used ...
Dim strReferrer As String = HttpContext.Current.Request.UrlReferrer.AbsoluteUri
to get the calling page that called my Web API 2 web service.
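One caveat: UrlReferrer comes from the optional Referer header and is null when the caller omits it, so guard against that. A C# equivalent (sketch; the "(unknown)" fallback is an arbitrary choice):

    // Inside a handler; UrlReferrer is null when no Referer header is sent.
    string referrer = HttpContext.Current.Request.UrlReferrer?.AbsoluteUri ?? "(unknown)";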
