Second concurrent page method call not working - asp.net

I have two page method calls that I perform. One is a long request that performs a lot of processing. The second one is used to retrieve the status of the first call. The problem is that the second call doesn't get a response until the first one has ended. Looking at Fiddler, I see the first page method send its request, immediately followed by the second page method's request, but the second request never reaches Visual Studio until the first one ends. I thought IE7 allowed two concurrent requests at a time.

Related

What is the HTTP network flow used by Firestore JS SDK?

We use the Firebase JS SDK heavily, and at one point encountered an issue where updates made by the SDK (using updateDoc) did not reach the server (though no network issues were detected). The same happened for an onSnapshot call (no updates were retrieved from the server, although there were updates).
To better understand the issue, we wanted to see what happens at the network layer (in terms of HTTP request flow), especially when we listen for updates or perform updates.
This question was created to share our findings with others who are wondering about this, and in case someone from the Firebase team can confirm them.
We performed updateDoc and onSnapshot calls using the Firebase JS SDK for Firestore, and expected to be able to associate the network requests seen in Chrome DevTools with these SDK calls.
Below are the observations regarding the HTTP calls flow.
When we initialize Firestore, a Web Channel is created, and the traffic between the client and the server goes through it.
The Web Channel uses gRPC over HTTP/2 (thanks to Frank van Puffelen for the tip).
The default behavior is as follows:
Commands (e.g. start listen, set data) are sent using POST requests.
The response includes SID, which identifies the command lifecycle. Note that start listen commands (on different docs) will share the same SID.
SSE-based (server-sent events) GET requests with a ±1 minute interval are used to listen for results from the commands (with the appropriate SID).
After the interval ends, a new SSE-based GET request will be created unless the command finished (e.g. stop listen was called, data was set).
If the command finished, and no additional GET request is required, a POST terminate request will be sent with the relevant SID.
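To make the flow above concrete, here is a minimal TypeScript initialization sketch (Firebase v9 modular SDK, placeholder config values), creating the Firestore instance whose commands travel over the Web Channel described above:

    import { initializeApp } from "firebase/app";
    import { getFirestore } from "firebase/firestore";

    // Placeholder config -- replace with your project's values.
    const app = initializeApp({
      apiKey: "...",
      projectId: "my-project-id",
    });

    // Creates the Firestore instance; commands issued through it (start listen,
    // set data) show up on the network as the POST + SSE-based GET requests
    // described above.
    const db = getFirestore(app);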
The listen (onSnapshot) behavior by default is as follows:
Whenever onSnapshot is called, there is a POST request to a LISTEN endpoint to addTarget, which returns immediately with a SID. The addTarget includes the document path and a targetId generated by the client for later use. If there's already an open onSnapshot, it will use the same SID.
Following the POST response, a pending GET request is created which receives changes. It will use the same SID.
The pending GET request receives changes for all target (listened) objects (e.g. documents).
The GET request remains open for about a minute (sometimes less, sometimes more).
Every time new data arrives, it is processed, but the GET request remains open until its interval (±1 minute) has passed.
The GET request also receives the initial values once a target was added (via onSnapshot).
Once the interval has passed, the pending GET request ends (and we can see the response).
Once it ends, it is replaced by a new pending GET request, which behaves the same way.
Whenever we unsubscribe from a listener, there is a POST request to a LISTEN endpoint to removeTarget.
After unsubscribing from all targets, the GET request will still end only after the interval completes. However, no new GET request will be initiated, and a terminate POST request will be sent.
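As a sketch of how these observations map back to SDK calls (assuming the initialization above; the collection and document names are hypothetical):

    import { getFirestore, doc, onSnapshot } from "firebase/firestore";

    // Assumes the app was already initialized as in the earlier sketch.
    const db = getFirestore();

    // Triggers a POST to the LISTEN endpoint (addTarget) with a client-generated
    // targetId; subsequent changes arrive on the shared pending GET request.
    // "users"/"alice" are hypothetical names used only for illustration.
    const unsubscribe = onSnapshot(doc(db, "users", "alice"), (snapshot) => {
      console.log("current data:", snapshot.data());
    });

    // Later: triggers a POST to the LISTEN endpoint (removeTarget). Once all
    // targets are removed, the pending GET ends after its interval and a
    // terminate POST is sent.
    unsubscribe();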
Setting data for a doc (setDoc) behaves as follows:
A POST request is sent to the backend in order to start a session and receive SID.
A pending GET request is created, on which we aim to receive updates once the data has been set in the backend. The pending GET request lives for ±1 minute.
Another POST request is created with the update details (e.g. update field f1 on document d1).
Once the Firestore backend has updated the data, we'll receive the update operation details on the GET request, which will remain pending until the ±1 minute interval completes.
After the interval completes, and if the data was stored in the backend DB, the GET request will end, and a terminate POST request will be sent.
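Again as a sketch (hypothetical document and field names), the write flow above corresponds to calls like these:

    import { getFirestore, doc, setDoc, updateDoc } from "firebase/firestore";

    // Assumes the app was already initialized as in the earlier sketch.
    const db = getFirestore();

    // Triggers the POST that starts a session (returning a SID), the pending GET,
    // and another POST carrying the write itself; the acknowledgement arrives on
    // the GET before a terminate POST closes the session.
    // "docs"/"d1"/"f1" are hypothetical names used only for illustration.
    await setDoc(doc(db, "docs", "d1"), { f1: "initial value" });

    // An update behaves the same way at the network level.
    await updateDoc(doc(db, "docs", "d1"), { f1: "new value" });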
With long-polling (set by experimentalForceLongPolling) we have the following notable changes:
Every time there's a change in an object we listen on, the pending GET request is ended and a new one is created instead.
Instead of ±1 minute GET request, we have a ±30 seconds GET request followed by a ±15 seconds GET request.
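For completeness, a sketch of forcing this long-polling transport (placeholder config values; initializeFirestore must be called instead of getFirestore, before any other use of Firestore):

    import { initializeApp } from "firebase/app";
    import { initializeFirestore } from "firebase/firestore";

    // Placeholder config -- replace with your project's values.
    const app = initializeApp({
      apiKey: "...",
      projectId: "my-project-id",
    });

    // Forces the long-polling behavior described above: the pending GET is ended
    // and recreated whenever a listened object changes, and the ~30s/~15s GET
    // intervals replace the ±1 minute one.
    const db = initializeFirestore(app, { experimentalForceLongPolling: true });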

What is the difference between PUT and POST, and why is PUT considered idempotent?

It is said everywhere [after reading many posts] that PUT is idempotent, meaning multiple requests with the same inputs will produce the same result as the very first request.
But if we send the same request with the same inputs using the POST method, then again, it will behave like PUT.
So, what is the difference between PUT and POST in terms of idempotency?
The idea is that there should be a difference between how POST and PUT are used, not that the protocol enforces one. To clarify, a POST request should ideally create a new resource, whereas a PUT request should be used to update an existing one. So, a client sending two POST requests would create two resources, whereas two PUT requests wouldn't (or rather shouldn't) cause any undesirable change.
To go into more detail, idempotency means that, in an isolated environment, multiple identical requests from the same client have the same effect on the state of the resource as a single request. If a request from another client changes the state of the resource, that does not break the idempotency principle. However, if you really want to ensure that a PUT request does not end up overriding the changes made by another simultaneous request from a different client, you should always use ETags. To elaborate, a PUT request should always supply the ETag (obtained from a GET request) of the last known resource state, and the resource should be updated only if that ETag is still current; otherwise a 412 (Precondition Failed) status code should be returned. In the case of a 412, the client is supposed to GET the resource again and then retry the update. According to REST, this is vital to prevent race conditions.
According to the W3C (http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html):
'Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request.'
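As a minimal sketch of the ETag flow described above (the URL and payload are hypothetical; it uses the standard If-Match request header), a conditional PUT could look like this in TypeScript:

    // Hypothetical endpoint used only for illustration.
    const url = "https://api.example.com/articles/42";

    async function updateArticle(changes: { title: string }): Promise<void> {
      // GET the current representation and remember its ETag.
      const getResponse = await fetch(url);
      const etag = getResponse.headers.get("ETag");
      const current = await getResponse.json();

      // PUT the full updated representation, conditional on the ETag still matching.
      const putResponse = await fetch(url, {
        method: "PUT",
        headers: {
          "Content-Type": "application/json",
          ...(etag ? { "If-Match": etag } : {}),
        },
        body: JSON.stringify({ ...current, ...changes }),
      });

      if (putResponse.status === 412) {
        // Precondition Failed: another client updated the resource in the meantime.
        // Re-fetch and retry (a real implementation would cap the retries or
        // surface the conflict to the user).
        return updateArticle(changes);
      }
    }

Repeating the same PUT with the same body leaves the resource in the same state, which is what makes it idempotent; a POST to a collection endpoint would typically create a new resource on every repeat.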

In Mate, Sending two or more requests to the server simultaneously?

I'm using Mate's RemoteObjectInvoker to call methods in my FluorineFX-based API. However, all requests seem to be sent to the server sequentially. That is, if I dispatch a group of messages at the same time, the second one isn't sent until the first returns. Is there any way to change this behavior? I don't want my app to be unresponsive while a long request is processing.
This thread will help you understand what happens (it talks about BlazeDS/LiveCycle, but I assume Fluorine uses the same approach). In a few words, what happens is:
a) The Flash Player groups all your calls into one HTTP POST.
b) The server (BlazeDS, Fluorine, etc.) receives the request and executes the methods serially, one after another.
Solutions
a) Have one HTTP POST per method, instead of one HTTP POST containing all the AMF messages. For that you can use HTTPChannel instead of AMFChannel (internally it uses flash.net.URLLoader instead of flash.net.NetConnection). You will be limited to the maximum number of parallel connections defined by your browser.
b) Have only one HTTP POST but implement a clever solution on the server (it will cost you a lot of development time). Basically, you can write your own parallel processor and use message consumers/publishers in order to send the results of your methods to the client.
c) There is a workaround similar to a) at https://bugs.adobe.com/jira/browse/BLZ-184: create your RemoteObject by hand and append a random id to the end of the endpoint.

Losing changed data in session

Our ASP.NET 2.0 application runs a very long (synchronized) process before sending the response back to the client. I observed that a second request, exactly the same as the initial one, was sent after the client's IE8 had been waiting for a response for a long time while our application was still processing the first request.
I use the page session with a predefined key to store a flag when the initial request arrives, and then start the long process while the client's IE waits for the response, so that if a second request comes in, our application checks the session value. After our application sets the session flag and starts processing, I use Fiddler's "Abort Session" to abort the initial request; right away the second request (same as the first one) is sent automatically, but the session value set earlier seems to no longer exist.
Any thoughts?
When the second request comes in during your ongoing process, isn't it overwriting your current request's value, since you are only storing one item? That's assuming both requests are coming in under the same session.
Maybe consider storing a list of items, so that you can add the second item to your list of flags and then find any previous items and delete them.
Maybe kill the request currently in the session before starting the second request's session?
I don't really understand your problem / solution all that well but hopefully that helps.
Edit based on your comment:
If it no longer exists, it's probably due to your session timing out and wiping the values, so the second request wouldn't be able to access it. Is the second connection coming in under the exact same session? Compare the session IDs in both cases. Also check your timeout.
You could also store this information in your application Cache, which has a really long expiry. Have a dictionary with the key being the session ID, or even the user if you only want one process per user, and then store your value. That way, when the second request comes in from the same user, you will be able to find it regardless of session ID. Just make sure you clear it once your process is complete.

Notifying the user after a long Ajax task when they might be on a different page

I have an Ajax request to a web service that typically takes 30-60 seconds to complete. In some cases it could take as long as a few minutes. During this time the user can continue working on other tasks, which means they will probably be on a different page when the task finishes.
Is there a way to tell that the original request has been completed? The only thing that comes to mind is to:
wrap the web service with a web service of my own
use my web service to set a flag somewhere
check for that flag in subsequent page requests
Any better ways to do it? I am using jQuery and ASP.Net, if it matters.
You could add another method to your web service that allows you to check the status of a previous request. Then you can use ajax to poll the web service every 30 seconds or so. You can store the request id or whatever in Session so your ajax call knows what request ID to poll no matter what page you're on.
I would say you'd have to poll once in a while to see if the request has ended and show some notification, like this site does with badges, for example.
First, make your request return immediately with something like "Started processing...". Then use a different request to poll for the result. It is not good for either the server or the client's browser to have long-lived open HTTP sessions. Moreover, the user should be informed and educated that they are starting a request that could take some time to complete.
To display the result you could have a "notification area" in all of your web pages. Alternatively, you could have a dedicated page for this and instruct the user to navigate there. As others have suggested, you could use polling to get the result.
You could use frames on your site, and perform all your long AJAX requests in an invisible frame. Frames add a certain level of pain to development, but might be the answer to your problems.
The only other way I could think of doing it is to actually load the other pages via an AJAX request, such that there are no real page reloads - this would mean that the AJAX requests aren't interrupted, but may cause issues with breaking browser functionality (back/forward, bookmarking, etc).
Since web development is stateless (you can't set a trigger/event on a server to update the client), the viable strategy is to set up a status function that you can call intermittently using a JavaScript timer to check whether your code has finished executing. When it finishes, you can update your view.
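For example, a minimal TypeScript polling sketch (the status endpoint, the task id, and the 30-second interval are assumptions, not part of the original service; jQuery's $.ajax would work the same way as fetch here):

    // Hypothetical status endpoint -- in practice this would be a status method
    // on your own web service (or on the wrapper service suggested above).
    const STATUS_URL = "/Services/TaskStatus.asmx/GetStatus";

    // Poll every 30 seconds until the long-running task reports completion.
    // taskId would come from the response that started the task (e.g. kept in
    // Session or sessionStorage so any page can pick it up).
    function pollTaskStatus(taskId: string, onDone: (result: unknown) => void): void {
      const timer = window.setInterval(async () => {
        const response = await fetch(`${STATUS_URL}?id=${encodeURIComponent(taskId)}`);
        const status = await response.json();

        if (status.finished) {
          window.clearInterval(timer);
          onDone(status.result); // e.g. update a notification area on the current page
        }
      }, 30_000);
    }

    // Usage: pollTaskStatus("abc123", result => alert("Task finished"));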

Resources